AI Governance Without the CISO Is Just Innovation Theatre
If your AI governance committee doesn’t include your CISO, it isn’t a governance committee — it’s a project team with a compliance-shaped blind spot.
Every enterprise wants to say it governs AI responsibly. Most of them are performing the appearance of governance while structurally excluding the one function equipped to operationalise it.
Across industries, organisations are standing up AI governance committees, drafting usage policies, and appointing oversight leads. The energy is real. The problem is who’s at the table — and who isn’t. In too many cases, AI governance is being led by Chief Data Officers, Chief Technology Officers, or innovation teams. Security leadership, when included at all, is added as an afterthought: a reviewer, not an architect.
The result is governance frameworks that look compelling on slides but have no operational teeth. They address ethics, bias, and transparency — all important — but skip the controls that determine whether an AI system is actually safe to run in production: identity management, data classification, access governance, incident response, and regulatory exposure mapping. These are not theoretical concerns. They are the CISO’s daily operating reality.
This matters now more than ever. The EU AI Act’s high-risk obligations become enforceable in August 2026, requiring risk management systems, technical documentation, human oversight, and cybersecurity controls for AI systems operating in sensitive domains.[1]
NIS2’s management accountability provisions are already active across transposed jurisdictions. And the threat landscape is moving faster than any governance committee’s quarterly meeting cadence.
The Structural Exclusion Problem
Half of organisations have now established dedicated AI governance committees.[2]
That sounds like progress — until you examine who sits on them. The typical composition includes legal, compliance, data, engineering, and business leadership. Security is frequently listed as a “stakeholder” rather than a co-owner. This mirrors a pattern familiar from cloud adoption a decade ago: business units moved to the cloud, IT enabled it, and security was invited to review the architecture after procurement was complete.
The consequences were predictable then, and they are predictable now. When AI governance is structured without security co-ownership, you get usage policies without enforcement mechanisms, data processing agreements without classification controls, risk assessments that consider reputational and ethical risk but not operational or cyber risk, and vendor evaluations that assess capability but not the security posture of the AI supply chain.
The governance ownership question remains unresolved for many organisations. As one analysis put it, accountability is fragmented across the CTO, CDO, CRO, and CISO — and when everyone assumes someone else is responsible, no one actually governs.[3]
Shadow AI: The Governance Gap Made Visible
If there is a single metric that exposes the gap between governance theatre and operational governance, it is the scale of Shadow AI.
Ninety-eight percent of organisations report employees using unsanctioned AI applications.[4]
Three out of four CISOs have discovered unapproved GenAI tools already running in their environments, often with embedded credentials, API integrations, or OAuth tokens that connect directly into enterprise systems.[5]
Nearly half of employees admit to adopting AI tools without employer approval, with a significant proportion sharing sensitive enterprise data, including research datasets, employee records, and financial information, with consumer-grade tools.[6]
This is not an awareness problem. It is a structural one. Employees use unsanctioned AI tools because the organisation has not provided governed alternatives fast enough. When approved tools are made available with appropriate guardrails, unauthorised usage drops dramatically — one healthcare system reported an 89% reduction after deploying sanctioned alternatives.[7]
Shadow AI is not evidence of rogue employees. It is evidence of a governance model that excluded the function responsible for managing exactly this kind of risk: the CISO.
The AI Identity Crisis Security Leaders Are Already Managing
Beyond Shadow AI, there is a deeper operational challenge that AI governance committees without security representation are not equipped to address: AI systems now function as identities within enterprise environments.
Seventy-one percent of organisations report that AI tools have access to core business systems such as CRM and ERP platforms. But only 16% say that access is governed effectively.[8]
Nearly half of security leaders have already observed AI agents exhibiting unintended or unauthorised behaviour, and a third have dealt with an actual security incident or near-miss tied to AI systems in the past year.[9]
These AI identities operate at machine speed, with privilege levels that in many cases no one explicitly granted. They don’t follow human access patterns. They don’t leave conventional audit trails. And they create a class of insider risk that traditional governance frameworks were never designed to handle.
This is the CISO’s domain. Identity governance, privilege management, access control, and insider threat detection are core security competencies. An AI governance committee that does not include the CISO is, by definition, ungoverned on the dimension that determines whether AI systems are operationally safe.
The Regulatory Convergence the C-Suite Hasn’t Mapped
The EU AI Act, NIS2, and GDPR are converging on a single operational reality: organisations deploying AI systems in Europe face overlapping governance obligations that no single function can manage alone.
The EU AI Act’s high-risk obligations, enforceable from August 2026, require risk management systems, data governance, technical documentation, human oversight, and — critically — cybersecurity and robustness requirements for AI systems operating in sensitive domains.[10]
NIS2 already requires management bodies to approve and oversee cybersecurity risk management measures, with provisions for personal liability in cases of serious negligence. When the AI systems an organisation deploys become vectors for cyber risk — through data exposure, privilege escalation, or supply chain compromise — the question of who approved and oversaw their deployment becomes a board-level liability question.
GDPR’s requirements around automated decision-making, data protection impact assessments, and lawful basis for processing all intersect with AI deployment. The CISO is typically the function that operationalises data protection controls and coordinates with the DPO.
No CDO or CTO, however capable, can single-handedly navigate this convergence. It requires the security leader who understands how controls are implemented, how incidents are detected and reported, and how regulatory exposure translates into operational risk.
What Governance Without the CISO Actually Looks Like
The pattern is consistent. Organisations that exclude the CISO from AI governance produce frameworks that are structurally incomplete:
They draft AI usage policies that define acceptable use but lack monitoring or enforcement capabilities. They conduct vendor risk assessments for AI tools that evaluate functionality and data processing agreements but not the security architecture of the model’s supply chain. They build AI risk registers that capture ethical and reputational risk but not cyber risk scenarios: prompt injection, data poisoning, model exfiltration, or credential compromise via AI integrations. They approve AI deployments without identity governance, access controls, or incident response playbooks specific to AI-related incidents.
The board receives a governance dashboard that shows committee activity, policy coverage, and adoption metrics. What it does not show is whether the AI systems running in the enterprise are secure, whether their access is governed, or whether the organisation could detect and respond to an AI-related breach within its regulatory reporting timelines.
That gap is not a CISO problem. It is a board-level risk that the CISO has been structurally prevented from addressing.
The Boardroom Alignment Problem
There is a growing disconnection between how boards perceive AI governance maturity and the reality on the ground.
A survey of Fortune 500 executives found that 70% of organisations now have AI risk committees and 41% have dedicated AI governance teams — yet only 14% report being fully ready for AI deployment.[11]
Proofpoint’s 2025 Voice of the CISO report showed that boardroom alignment with CISOs declined from 84% in 2024 to 64%, even as boards elevated business valuation as their top concern following a cyberattack.[12]
The WEF Global Cybersecurity Outlook 2026 identified a telling perception gap: CEOs now rank AI vulnerabilities as their second-highest cyber risk, while CISOs do not even list it in their top three.[13]
This is not a contradiction. It reflects the fact that CISOs are managing AI risk operationally — through identity controls, access governance, and incident response — while boards are experiencing it as a strategic abstraction. When CISOs are excluded from the governance structure that frames AI risk for the board, both sides lose: the board gets an incomplete picture, and the CISO loses the mandate to act.
What Operational AI Governance Actually Requires
Effective AI governance is not a committee. It is an operational capability that must be embedded across the enterprise. And it requires the CISO as a co-architect, not a reviewer.
AI identity governance should treat AI agents, copilots, and automated systems as non-human identities subject to the same lifecycle controls as human users: provisioning, least-privilege enforcement, access certification, and deprovisioning. This is identity and access management. It belongs to the CISO.
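To make that concrete, here is a minimal sketch of what lifecycle controls for a non-human identity might look like in code. The class, fields, and the 90-day certification window are illustrative assumptions for this article, not a reference to any particular IAM product.

```python
# Illustrative sketch: an AI agent treated as a non-human identity with the
# same lifecycle controls as a human user. Names, fields, and the 90-day
# certification window are assumptions for illustration, not a product API.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

CERTIFICATION_WINDOW = timedelta(days=90)  # assumed policy: re-certify quarterly

@dataclass
class NonHumanIdentity:
    identity_id: str                               # e.g. a service account or OAuth client
    owner: str                                     # accountable human owner
    scopes: set[str] = field(default_factory=set)  # permissions actually granted
    last_certified: datetime | None = None
    active: bool = True

def provision(identity: NonHumanIdentity, requested: set[str], approved: set[str]) -> None:
    """Least privilege: grant only scopes that were both requested and approved."""
    identity.scopes = requested & approved
    identity.last_certified = datetime.now(timezone.utc)

def due_for_certification(identity: NonHumanIdentity) -> bool:
    """Never certified, or certified too long ago, means the access needs review."""
    if identity.last_certified is None:
        return True
    return datetime.now(timezone.utc) - identity.last_certified > CERTIFICATION_WINDOW

def deprovision(identity: NonHumanIdentity) -> None:
    """Retiring an agent revokes everything, exactly as offboarding a user would."""
    identity.scopes.clear()
    identity.active = False
```

The point is not the code; it is that every step in it already exists as a standard IAM control the CISO runs for human users today.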
AI-specific incident response requires playbooks that address prompt injection, data exfiltration via AI tools, model manipulation, and the disclosure obligations that arise when AI systems process personal data or operate in NIS2-regulated environments. This is incident response. It belongs to the CISO.
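A simplified sketch of what that might look like, with playbooks encoded as data so they can be versioned, exercised in tabletops, and wired into tooling. The incident types and steps are illustrative assumptions; the reporting clocks reflect NIS2's 24-hour early warning and GDPR's 72-hour breach notification windows, but actual obligations depend on jurisdiction and incident classification.

```python
# Minimal sketch: AI-specific response playbooks encoded as data. Incident
# types and steps are illustrative assumptions; verify reporting duties for
# your own jurisdiction and sector before relying on any of these clocks.
PLAYBOOKS = {
    "prompt_injection": {
        "detect":  ["flag anomalous tool calls", "review model input/output logs"],
        "contain": ["revoke the agent's session tokens", "isolate affected integrations"],
        "report_hours": 24,   # NIS2 early warning for significant incidents
    },
    "data_exfiltration_via_ai_tool": {
        "detect":  ["DLP alerts on AI endpoints", "egress volume anomalies"],
        "contain": ["block the tool at the gateway", "rotate exposed credentials"],
        "report_hours": 72,   # GDPR personal data breach notification window
    },
}

def within_reporting_window(incident_type: str, hours_since_detection: float) -> bool:
    """Has the regulatory reporting clock for this incident type run out yet?"""
    return hours_since_detection <= PLAYBOOKS[incident_type]["report_hours"]
```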
AI supply chain security means evaluating not just the vendor’s data processing agreements but their model provenance, training data practices, API security, and the third-party components embedded in their AI stack. This is supply chain risk management. It belongs to the CISO.
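As a deliberately simplified illustration, a weighted vendor assessment might look like the sketch below. The dimensions mirror the paragraph above; the weights and approval threshold are placeholder assumptions a real programme would calibrate.

```python
# Hypothetical sketch of an AI vendor security assessment. The weights and
# threshold are placeholders, not an established standard; the point is that
# the security dimensions get scored alongside capability, not after it.
ASSESSMENT_WEIGHTS = {
    "model_provenance":        0.30,  # verifiable origin and version history
    "training_data_practices": 0.25,  # sourcing, licensing, poisoning controls
    "api_security":            0.25,  # authentication, rate limiting, tenant isolation
    "third_party_components":  0.20,  # SBOM-style visibility into the AI stack
}

def assess_vendor(scores: dict[str, float], threshold: float = 0.7) -> tuple[float, bool]:
    """Each dimension is scored 0.0-1.0 by the security team."""
    total = sum(ASSESSMENT_WEIGHTS[dim] * scores[dim] for dim in ASSESSMENT_WEIGHTS)
    return total, total >= threshold

# Example: strong API security does not compensate for opaque model provenance.
score, approved = assess_vendor({
    "model_provenance":        0.2,
    "training_data_practices": 0.5,
    "api_security":            0.9,
    "third_party_components":  0.6,
})  # -> (0.53, False)
```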
AI risk quantification requires translating AI-specific threats into the same risk language the board uses for every other enterprise risk: financial exposure, regulatory penalty, operational disruption, and reputational impact. This is cyber risk communication. It belongs to the CISO.
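The standard tooling here is the familiar annualised loss expectancy model. A minimal sketch, with every scenario and figure an invented placeholder:

```python
# Sketch of AI risk quantification using the classic annualised loss
# expectancy model: ALE = single loss expectancy x annual rate of
# occurrence. Every scenario and figure below is an invented placeholder.
def annualised_loss_expectancy(single_loss_eur: float, annual_rate: float) -> float:
    return single_loss_eur * annual_rate

ai_risk_register = [
    # (scenario, estimated single loss in EUR, estimated occurrences per year)
    ("credential compromise via a shadow AI integration",  2_500_000, 0.4),
    ("personal data exposure through an unsanctioned tool", 4_000_000, 0.25),
    ("AI agent privilege escalation in the ERP system",     6_000_000, 0.1),
]

for scenario, sle, aro in ai_risk_register:
    print(f"{scenario}: ALE = EUR {annualised_loss_expectancy(sle, aro):,.0f}")
```

Crude as it is, this is the format boards already read for every other risk category, which is precisely the point.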
None of this can happen from outside the governance structure.
What CISOs Should Do Now
The challenge for CISOs is not to wait for an invitation. It is to demonstrate that AI governance without security co-ownership is incomplete by design — and to present the board with a clear, risk-based argument for inclusion.
Claim the AI identity governance mandate. If AI systems are accessing enterprise data and business-critical applications, they fall within the CISO’s existing scope. Document the AI identities operating in your environment, their access levels, and the gaps in their governance. Present this to the board as an operational risk, not a technology update.
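A sketch of what that documentation exercise might produce. The record fields and gap checks are illustrative assumptions, not drawn from any specific security product; the output is deliberately board-readable rather than tool-specific.

```python
# Hypothetical sketch of the inventory step: enumerate AI identities, record
# what they reach, and surface governance gaps. Fields and checks here are
# illustrative assumptions, not drawn from any specific security product.
from dataclasses import dataclass

@dataclass
class AIIdentityRecord:
    name: str
    connects_to: list[str]          # business systems reached (CRM, ERP, ...)
    owner: str | None               # accountable human, if any
    certified_this_quarter: bool    # has access been reviewed recently?

def governance_gaps(record: AIIdentityRecord) -> list[str]:
    gaps = []
    if record.owner is None:
        gaps.append("no accountable owner")
    if not record.certified_this_quarter:
        gaps.append("access not certified this quarter")
    return gaps

inventory = [
    AIIdentityRecord("sales-copilot", ["CRM"], owner="j.doe", certified_this_quarter=False),
    AIIdentityRecord("finance-agent", ["ERP"], owner=None,    certified_this_quarter=False),
]
for record in inventory:
    print(record.name, "->", ", ".join(governance_gaps(record)) or "governed")
```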
Map the regulatory convergence. Show the board where the EU AI Act, NIS2, and GDPR obligations overlap on AI governance and where the current governance structure has no owner for those intersections. Make the accountability gap visible.
Build AI-specific incident response capability now. Do not wait for a governance committee to authorise it. Develop playbooks for AI-related incidents, run tabletop exercises, and ensure your team can detect and respond to AI-specific threats within your existing regulatory reporting timelines.
Reframe the conversation. The CISO’s role in AI governance is not to slow down innovation. It is to ensure that AI deployments are operationally safe, legally defensible, and resilient. The argument is not “you need security approval” — it is “you need someone who can tell the board whether these systems are actually governed.”
The Real Test
The organisations that will navigate the AI governance challenge successfully are not those with the most impressive committee structures or the most comprehensive policy documents. They are the ones that have embedded security leadership into AI governance from the beginning — not as a checkbox, but as an operational co-owner.
AI governance without the CISO is not governance. It is innovation theatre: a performance of oversight that leaves the organisation structurally exposed to the risks that governance was supposed to manage.
The stage is set. The question is whether CISOs will be in the room where the decisions are made — or left to manage the consequences of decisions made without them.
References
[1] European Commission — AI Act Implementation Timeline
[2] IANS Research — The CISO’s Expanding AI Mandate: Leading Governance in 2026
[4] Vectra AI — Shadow AI: Risks, Costs, and Enterprise Governance (2026)
[5] Saviynt / Cybersecurity Insiders — 2026 CISO AI Risk Report
[6] CIO.com — Roughly Half of Employees Are Using Unsanctioned AI Tools (BlackFog Survey, January 2026)
[7] Vectra AI — Shadow AI: Risks, Costs, and Enterprise Governance (Healthcare Brew / 2026 data)
[8] Saviynt / Cybersecurity Insiders — 2026 CISO AI Risk Report
[9] Saviynt / Cybersecurity Insiders — 2026 CISO AI Risk Report
[10] Securiti — EU AI Act: What Changes Now vs What Starts in 2026
[11] Sedgwick — 2026 Forecasting Report, cited in Sengupta, AI Governance in 2026 (Medium, January 2026)
[12] Proofpoint — 2025 Voice of the CISO Report
[13] World Economic Forum — Global Cybersecurity Outlook 2026 (via Fortinet CISO Collective)