Shadow AI Is No Longer the CISO's Problem — It's the Board's Liability

In my previous article, I wrote about the CISO's dilemma: how we speak a language of risk while boards hear only revenue. The feedback I received confirmed what most of us already know — the communication gap is real, it is persistent, and it has consequences.
But something has changed. A new threat has entered the room, and this time it didn't wait for a boardroom presentation to make itself known. It walked in through every employee's laptop, every SaaS subscription approved without IT review, and every AI assistant quietly embedded into daily workflows.
It's called Shadow AI. And it is no longer a cybersecurity problem the CISO can manage in isolation. It has become a governance liability that sits squarely on the Board's shoulders — whether they know it or not.
What Shadow AI Actually Means
Shadow AI refers to the use of artificial intelligence tools and services within an organisation without the knowledge, approval, or oversight of IT and security teams. Think of employees pasting customer data into ChatGPT to draft a proposal. Developers using an unsanctioned AI coding assistant that sends code snippets to external servers. Finance teams feeding sensitive spreadsheets into AI summarisation tools because it saves them two hours on a Monday morning.
None of this is malicious. All of it is dangerous.
The problem with Shadow AI is not that employees are breaking rules. The problem is that organisations have created conditions where the path of least resistance — the fast, productive, AI-assisted path — runs directly through a governance blind spot.
In my doctoral research on AI governance and enterprise cyber risk (I am currently completing a DBA in Artificial Intelligence, with a dissertation focused specifically on Shadow AI and the security ownership gap), I see this pattern repeated across sectors and organisation sizes. The technology has outpaced the governance. And the gap between them is where incidents are born.
From Shadow IT to Shadow Agents: The Escalation Nobody Is Ready For
For years, security teams have managed Shadow IT — the use of unsanctioned applications and cloud services. It was a manageable problem. Visibility tools improved. Policies were written. Training was delivered.
Shadow AI is different in kind, not just degree.
Where Shadow IT introduced unauthorised tools, Shadow AI introduces autonomous behaviour. According to Google Cloud's Cybersecurity Forecast 2026, the proliferation of sophisticated, autonomous agents — often deployed by employees without corporate approval — "will create invisible, uncontrolled pipelines for sensitive data, leading to data leaks and compliance violations." [1] The report explicitly names this the "Shadow Agent" problem, warning that by 2026, it will escalate into one of the defining security challenges for enterprise organisations.
This is not a fringe prediction from a vendor with something to sell. It is the assessment of the Google Threat Intelligence Group and Mandiant Consulting, drawn from real-world incident response data.
These agents act, make decisions, and move data — often without any human in the loop at the moment it matters. And banning them is not a viable answer. As the same forecast states, banning agents "only drives usage off the corporate network, eliminating visibility entirely." [1] What is needed is governance — clear, proportionate, and actually enforced governance.
Why This Is Now a Board-Level Issue
Here is where boards need to pay attention, and I say this with the same directness I brought to the boardroom communication problem in my previous piece.
NIS2 and Personal Liability
Under the NIS2 Directive (Directive (EU) 2022/2555), organisations classified as essential or important entities are required to implement cybersecurity risk management measures covering the security of their network and information systems, including supply chain security. Article 20 goes further than its predecessor in one critical way: it places personal accountability on management bodies.
As DLA Piper's analysis of NIS2's management body rules notes, "NIS2 introduces personal accountability for members of the management bodies," requiring that they not only approve but actively oversee the implementation of cybersecurity risk management measures. [2] Article 20(1) mandates this oversight as a standing obligation — not an annual event.
The practical consequence is stark. When an employee sends client data to an external AI service without authorisation, that is not simply a rogue action. In regulatory terms, it is a supply chain event — and Article 21(2)(d) of NIS2 explicitly requires entities to address security in their supply chain relationships, including the services they use. [3] If such an event results in a reportable incident, regulators will not direct their questions at the employee. They will examine the governance structure above them, and whether management was fulfilling its documented oversight obligations.
Fines for essential entities under NIS2 can reach €10 million or 2% of global annual turnover — whichever is greater. [3] And member states have latitude to gold-plate beyond the EU baseline. Germany's national implementation act, which entered into force in December 2025, allows management bodies to be held personally responsible for failures to implement and oversee required measures. [4]
The EU AI Act: Another Layer of Exposure
The EU AI Act adds a separate but overlapping regulatory dimension. The prohibition on certain AI practices has applied since February 2025. GPAI model obligations became effective in August 2025. Most remaining obligations, including the full transparency framework under Article 50, apply from August 2026. [5]
Organisations that deploy AI systems in ways that affect individuals — including employees — carry obligations around transparency, risk classification, and human oversight. Deploying an AI tool to screen job applications, assess employee performance, or make customer-facing decisions without proper governance is not just a cybersecurity risk. It is a direct regulatory exposure under a framework with fines of up to €35 million or 7% of global annual turnover for the most serious violations. [6]
Critically, analysis of current enterprise readiness suggests that over half of organisations lack systematic inventories of AI systems currently in production. [7] You cannot govern, classify, or demonstrate compliance for AI systems you do not know exist. Shadow AI makes this problem structurally worse.
What the Security Ownership Gap Looks Like in Practice
In conversations with security leaders across Europe, a consistent picture emerges. The CISO raises Shadow AI as a risk. It is acknowledged. It is perhaps added to a risk register. And then it sits there, while the organisation continues to grow its AI footprint organically — department by department, tool by tool — without a coherent framework for who owns the risk, who monitors it, and who is accountable when something goes wrong.
This is what I describe in my dissertation as the security ownership gap: the space between identifying a risk and actually governing it. It is not a failure of intent. It is a structural failure. Security teams are asked to monitor a landscape they cannot fully see. Business teams adopt tools they do not fully understand. And boards approve digital transformation strategies without visibility into the shadow infrastructure growing beneath them.
The gap is not filled by a policy document. It is filled by a decision — a deliberate, board-level decision to treat AI governance as a strategic priority, not a compliance checkbox.
What Good Governance Actually Looks Like
There is no simple answer, but there is a defensible starting framework.
Visibility before control. You cannot govern what you cannot see. The first step is an honest inventory of AI tools in use across the organisation — not just the ones IT has approved, but the ones employees are actually using. This requires a combination of technical discovery and cultural honesty: making it safe to report AI tool usage without fear of punishment.
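To make "technical discovery" concrete, here is a minimal sketch of one log-based approach: flagging outbound traffic to known AI endpoints in a proxy log export. The domain watchlist, the CSV column names, and the file name are illustrative assumptions, not an authoritative inventory method, and any real deployment would pair this with CASB or DNS telemetry.

```python
# Minimal sketch: flag outbound requests to known AI service domains
# in a proxy log export, grouped by department.
import csv
from collections import Counter

# Hypothetical watchlist; a real programme would maintain a curated,
# regularly updated list of AI endpoints.
AI_DOMAINS = {
    "api.openai.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "api.anthropic.com",
}

def discover_ai_usage(proxy_log_path: str) -> Counter:
    """Count requests per (department, domain) for watchlisted AI domains.

    Assumes a CSV export with 'department' and 'destination_host' columns.
    """
    hits: Counter = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if host in AI_DOMAINS:
                hits[(row["department"], host)] += 1
    return hits

if __name__ == "__main__":
    for (dept, domain), count in discover_ai_usage("proxy_export.csv").most_common():
        print(f"{dept:<15} {domain:<25} {count} requests")
```

Even a crude pass like this tends to surface departments and tools nobody had declared, which is exactly the honest starting point the inventory needs.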
Classification before prohibition. Not every AI tool carries the same risk. A framework that classifies AI usage by data sensitivity, autonomy level, and external connectivity allows proportionate responses. An AI assistant that summarises internal meeting notes carries a very different risk profile from an agent with API access to your CRM that autonomously contacts customers.
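As a rough illustration, those three axes can be expressed as a simple scoring scheme. This is a minimal sketch: the tiers, weights, and thresholds are assumptions to be calibrated against your own risk appetite, not a standard.

```python
# Illustrative classification along three axes: data sensitivity,
# autonomy level, and external connectivity.
from dataclasses import dataclass
from enum import IntEnum

class DataSensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    REGULATED = 3      # personal data, financial records, health data

class Autonomy(IntEnum):
    SUGGESTS = 0              # human reviews every output
    ACTS_WITH_APPROVAL = 1
    ACTS_AUTONOMOUSLY = 2     # agent executes with no human in the loop

@dataclass
class AITool:
    name: str
    sensitivity: DataSensitivity
    autonomy: Autonomy
    external_connectivity: bool  # sends data outside the corporate boundary

def classify(tool: AITool) -> str:
    """Map a tool to a proportionate response tier rather than a blanket ban."""
    score = tool.sensitivity + tool.autonomy + (2 if tool.external_connectivity else 0)
    if score >= 5:
        return "prohibit pending board-level risk acceptance"
    if score >= 3:
        return "allow with named owner, DPA review and logging"
    return "allow with standard acceptable-use terms"

# The two examples from the text land in different tiers:
notes_bot = AITool("meeting summariser", DataSensitivity.INTERNAL, Autonomy.SUGGESTS, False)
crm_agent = AITool("CRM outreach agent", DataSensitivity.REGULATED, Autonomy.ACTS_AUTONOMOUSLY, True)
print(classify(notes_bot))   # allow with standard acceptable-use terms
print(classify(crm_agent))   # prohibit pending board-level risk acceptance
```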
Ownership before accountability. Every AI tool in use should have a named owner — not just a user, but someone accountable for its governance. This means understanding what data it processes, what its terms of service permit, and what the breach notification obligations are if something goes wrong.
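In practice, the ownership record can be as simple as a structured registry entry. The schema below is an illustrative sketch (most organisations would hold this in a GRC platform rather than in code), but it captures the fields the paragraph names: a named owner, the data processed, what the vendor's terms permit, and the notification window.

```python
# Illustrative registry entry for a sanctioned AI tool; field names
# are assumptions, not a standard schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    tool_name: str
    owner: str                      # an accountable individual, not a team alias
    data_categories: list[str]      # e.g. ["customer PII", "source code"]
    tos_permits_training: bool      # do the vendor's terms allow training on your data?
    breach_notification_hours: int  # contractual notification window
    last_reviewed: date

registry = [
    AIToolRecord(
        tool_name="meeting summariser",
        owner="head.of.operations@example.com",
        data_categories=["internal meeting notes"],
        tos_permits_training=False,
        breach_notification_hours=72,
        last_reviewed=date(2026, 1, 15),
    ),
]

# An entry with no accountable owner is itself a finding.
unowned = [r.tool_name for r in registry if not r.owner]
```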
Documented decisions at board level. Where AI tools carry material risk, the decision to accept, mitigate, transfer, or avoid that risk should be documented and reviewed at the appropriate governance level. Under NIS2 Article 20, supervisory authorities require digital evidence showing that the board set direction, asked questions, issued challenges, and closed the loop on escalation. [8] Signing off on a policy once a year is no longer sufficient.
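One way to make "documented decisions" auditable is a machine-readable decision record. The schema below is a hypothetical sketch, not a regulatory template; the point is that each field corresponds to evidence a supervisory authority could ask for under Article 20.

```python
# Illustrative board decision record: accept / mitigate / transfer / avoid,
# with the review loop made explicit. Field names are assumptions.
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskResponse(Enum):
    ACCEPT = "accept"
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    AVOID = "avoid"

@dataclass
class BoardRiskDecision:
    risk_id: str              # links back to the risk register entry
    tool_name: str
    response: RiskResponse
    rationale: str            # the question the board asked and the answer it got
    decided_on: date
    review_by: date           # a standing obligation, not an annual event
    escalation_closed: bool   # was the CISO's escalation formally answered?

decision = BoardRiskDecision(
    risk_id="R-2026-014",
    tool_name="CRM outreach agent",
    response=RiskResponse.MITIGATE,
    rationale="Autonomous customer contact restricted to opted-in segments; quarterly review.",
    decided_on=date(2026, 2, 3),
    review_by=date(2026, 5, 3),
    escalation_closed=True,
)
```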
The Honest Conversation That Needs to Happen
Most CISOs I know are not surprised by Shadow AI. They have been watching it grow for two years. What surprises them is the speed of escalation — from employees using ChatGPT to draft emails, to business units running autonomous agents that touch production data.
The conversation that needs to happen is not "how do we stop employees using AI." That conversation is over, and AI won. The conversation that needs to happen is: who is accountable for AI risk in this organisation, and what decisions has the board actually made about it?
Because in the absence of that conversation, the default answer is: nobody is accountable, no decisions have been made, and the risk is accumulating quietly in every corner of the business.
That is not a technology problem. That is a governance failure. And governance failures, as NIS2 is now making very clear, have names attached to them.
A Final Thought
In my previous article, I described the CISO's dilemma as the struggle to be heard in a boardroom that only hears revenue. Shadow AI changes that dynamic.
For the first time, the risk is not something the CISO is warning about in the future tense. It is something happening right now, at scale, across every organisation that has not yet built a framework to govern it.
AI is simultaneously the most powerful productivity tool most organisations have ever deployed and one of the most unmanaged risk surfaces they have ever created. The revenue conversation and the risk conversation are about to collide.
The question is not whether boards will have to engage with this. They will. The question is whether they will engage with it before an incident — or after one.
I know which conversation I would rather be in.
References
[1] Google Cloud, Cybersecurity Forecast 2026, November 2025. Available at: https://cloud.google.com/blog/topics/threat-intelligence/cybersecurity-forecast-2026/
[2] DLA Piper, NIS2 Directive Explained: Part 2 – Management Bodies Rules, November 2025. Available at: https://www.dlapiper.com/en/insights/publications/2025/11/nis2-directive-explained-part-2-management-bodies-rules
[3] Codific, NIS2 Directive: Compliance Guide, Fines & Scope, November 2025. Available at: https://codific.com/nis-2-directive-compliance-guide-fines-scope/
[4] Morrison Foerster, Flipping the NIS2 Switch: What Germany's Implementation Means for 2026 Compliance, December 2025. Available at: https://www.mofo.com/resources/insights/251208-flipping-the-nis2-switch-what-germanys-implementation
[5] European Commission, Navigating the AI Act, 2025. Available at: https://digital-strategy.ec.europa.eu/en/faqs/navigating-ai-act
[6] DLA Piper, Latest Wave of Obligations Under the EU AI Act Take Effect, August 2025. Available at: https://www.dlapiper.com/en-us/insights/publications/2025/08/latest-wave-of-obligations-under-the-eu-ai-act-take-effect
[7] Secure Privacy, EU AI Act 2026 Compliance Guide: Key Requirements Explained, 2025. Available at: https://secureprivacy.ai/blog/eu-ai-act-2026-compliance
[8] ISMS.online, NIS 2 Article 20: What Personal Cyber Liability Means for Every Board Director, October 2025. Available at: https://www.isms.online/nis-2/enforcement/board-liability/article-20/
Angelos Varthalitis is Chief Information Security Officer at Ovivio NL (KidsKonnect), a leading European childcare SaaS platform. With over 20 years of cybersecurity experience across SaaS, energy, and transportation sectors, he holds certifications including CISM and NIS2 Lead Implementer. He is currently completing a Doctorate in Business Administration specialising in Artificial Intelligence, with a dissertation focused on Shadow AI and the security ownership gap in enterprise cyber risk governance. He writes at www.varthalitis.eu.

