<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Angelos Varthalitis]]></title><description><![CDATA[Practical insights on cybersecurity compliance, NIS2, and information security — helping European organizations turn regulatory complexity into operational resilience.]]></description><link>https://www.varthalitis.eu</link><image><url>https://substackcdn.com/image/fetch/$s_!cLT9!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6e49acf-9c42-4d01-b9e7-1a13a04d9488_400x400.png</url><title>Angelos Varthalitis</title><link>https://www.varthalitis.eu</link></image><generator>Substack</generator><lastBuildDate>Sun, 05 Apr 2026 15:13:58 GMT</lastBuildDate><atom:link href="https://www.varthalitis.eu/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Angelos Varthalitis]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[angelosvarthalitis@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[angelosvarthalitis@substack.com]]></itunes:email><itunes:name><![CDATA[Angelos Varthalitis]]></itunes:name></itunes:owner><itunes:author><![CDATA[Angelos Varthalitis]]></itunes:author><googleplay:owner><![CDATA[angelosvarthalitis@substack.com]]></googleplay:owner><googleplay:email><![CDATA[angelosvarthalitis@substack.com]]></googleplay:email><googleplay:author><![CDATA[Angelos Varthalitis]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Who Owns AI Security in Your Organization? 
(Spoiler: Probably No One)]]></title><description><![CDATA[The gap no one is talking about &#8212; and the certification built to close it.]]></description><link>https://www.varthalitis.eu/p/who-owns-ai-security-in-your-organization</link><guid isPermaLink="false">https://www.varthalitis.eu/p/who-owns-ai-security-in-your-organization</guid><dc:creator><![CDATA[Angelos Varthalitis]]></dc:creator><pubDate>Thu, 02 Apr 2026 05:20:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!AWzf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d6d89f0-9ccc-4905-b387-95ee5398d9f1_800x800.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There&#8217;s a question I&#8217;ve been asking executives, security managers, and IT directors lately. It goes like this:</p><p><em>&#8220;Your company is using AI. Who is personally responsible if something goes wrong with it?&#8221;</em></p><p>The room usually goes quiet.</p><p>Not because people don&#8217;t care. But because the honest answer, in most organizations, is: <strong>nobody, really.</strong> The developers built it. The vendor manages it. The CISO &#8220;oversees it&#8221; in the broadest possible sense. And somewhere in that triangle of vague accountability, real risk is accumulating &#8212; quietly, every day.</p><p>This is the AI governance gap. And in 2025, it&#8217;s no longer theoretical.</p><div><hr></div><h2>AI Is Already Inside Your Enterprise. Is It Secure?</h2><p>Think about where AI systems are actually operating right now in the average mid-to-large organization:</p><ul><li><p>Customer service chatbots handling sensitive queries</p></li><li><p>Fraud detection models making real-time financial decisions</p></li><li><p>HR tools screening candidates and summarizing performance reviews</p></li><li><p>GenAI assistants with access to internal documents and email</p></li></ul><p>Each of these systems has an attack surface. 
Each can be manipulated, poisoned, abused, or made to leak data &#8212; through techniques most security teams have never trained for.</p><p><strong>Prompt injection. Model inversion. Data poisoning. Adversarial inputs. Deepfake-enabled social engineering.</strong></p><p>These aren&#8217;t science fiction. They&#8217;re in the MITRE ATLAS framework. They&#8217;re in the OWASP LLM Top 10. They&#8217;re already being exploited in the wild.</p><p>And yet, most organizations approach AI security the same way they approached cloud security in 2012: as an afterthought.</p><div><hr></div><h2>The Regulation Is Coming. Ready or Not.</h2><p>The EU AI Act is now in force. NIST has published its AI Risk Management Framework. ISO/IEC 42001 defines requirements for AI management systems. Boards are asking questions. Regulators are drafting guidance.</p><p>The compliance window is closing &#8212; and organizations that haven&#8217;t built internal AI governance capacity are going to feel it.</p><p>Here&#8217;s the uncomfortable truth: <strong>you cannot comply with AI regulation using traditional cybersecurity skills alone.</strong> AI risk requires a different vocabulary, a different threat model, and a different kind of leadership.</p><div><hr></div><h2>Enter the C)AICSO&#8482;</h2><p>The <strong>Certified AI Cybersecurity Officer (C)AICSO&#8482;)</strong> by Mile2 is the first professional certification designed specifically for this role.</p><p>It&#8217;s not a developer course. 
It doesn&#8217;t require you to write a single line of code.</p><p>It&#8217;s a <strong>20-module strategic leadership program</strong> for the professionals who need to govern, defend, and audit AI systems at an organizational level &#8212; CISOs, Risk Managers, Auditors, Security Architects, and the growing wave of AI Governance Officers being appointed across industries.</p><p>Here&#8217;s what makes it different from every other AI security course out there:</p><p><strong>It treats GenAI as a potential insider threat vector.</strong> Most courses talk about AI as a tool to defend against external attackers. C)AICSO flips the frame: the AI system itself &#8212; its inputs, outputs, integrations, and supply chain &#8212; is part of your attack surface.</p><p><strong>It uses Mile2&#8217;s Progressive AI Risk Management Framework.</strong> This isn&#8217;t a static checklist. It&#8217;s a living methodology for identifying, evaluating, and mitigating AI risks as the technology evolves &#8212; which it does, fast.</p><p><strong>It maps directly to the frameworks regulators care about.</strong> NIST AI RMF. ISO/IEC 42001. EU AI Act. OWASP LLM Top 10. MITRE ATLAS. 
The course connects theory to the compliance landscape you&#8217;ll actually be navigating.</p><p><strong>It includes Red Teaming exercises designed for managers.</strong> Simulation labs, attack scenarios, and governance playbooks &#8212; all structured for strategic decision-makers, not penetration testers.</p><div><hr></div><h2>What You&#8217;ll Be Able to Do</h2><p>By the end of the program, certified professionals will be equipped to:</p><ul><li><p>Build a comprehensive framework for assessing and mitigating AI security risks</p></li><li><p>Red team and develop incident response plans for LLM and GenAI systems</p></li><li><p>Apply NIST and ISO frameworks to real AI workflows in the enterprise</p></li><li><p>Securely integrate GenAI tools without creating shadow AI exposure</p></li><li><p>Design governance blueprints for multi-stakeholder coordination and board-level oversight</p></li></ul><p>That last one matters more than people realize. The question of <em>who owns AI risk</em> &#8212; the CISO, the CIO, the CTO, a new Chief AI Officer &#8212; is one of the defining organizational design questions of the next five years. The C)AICSO prepares you to be the person who answers it.</p><div><hr></div><h2>The Bottom Line</h2><p>AI is not a future problem. It&#8217;s a present one.</p><p>The organizations that will navigate this transition well are not the ones with the most advanced AI &#8212; they&#8217;re the ones with the clearest ownership, the strongest governance, and professionals who understand the full threat landscape.</p><p>The C)AICSO&#8482; is how you build that capacity. 
Not just for yourself, but for your entire security program.</p><div><hr></div><p><em>The C)AICSO&#8482; Ultimate Combo &#8212; including course access, exam voucher, and one free retake &#8212; is available at <strong><a href="https://academy.avsecadvisory.eu/product/mile2-caicso-ai-cybersecurity-officer-self-study/">academy.avsecadvisory.eu</a></strong>.</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!AWzf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d6d89f0-9ccc-4905-b387-95ee5398d9f1_800x800.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!AWzf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d6d89f0-9ccc-4905-b387-95ee5398d9f1_800x800.jpeg 424w, https://substackcdn.com/image/fetch/$s_!AWzf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d6d89f0-9ccc-4905-b387-95ee5398d9f1_800x800.jpeg 848w, https://substackcdn.com/image/fetch/$s_!AWzf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d6d89f0-9ccc-4905-b387-95ee5398d9f1_800x800.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!AWzf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d6d89f0-9ccc-4905-b387-95ee5398d9f1_800x800.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!AWzf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d6d89f0-9ccc-4905-b387-95ee5398d9f1_800x800.jpeg" width="800" height="800" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7d6d89f0-9ccc-4905-b387-95ee5398d9f1_800x800.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:800,&quot;width&quot;:800,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:98092,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.varthalitis.eu/i/192926881?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d6d89f0-9ccc-4905-b387-95ee5398d9f1_800x800.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!AWzf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d6d89f0-9ccc-4905-b387-95ee5398d9f1_800x800.jpeg 424w, https://substackcdn.com/image/fetch/$s_!AWzf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d6d89f0-9ccc-4905-b387-95ee5398d9f1_800x800.jpeg 848w, https://substackcdn.com/image/fetch/$s_!AWzf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d6d89f0-9ccc-4905-b387-95ee5398d9f1_800x800.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!AWzf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7d6d89f0-9ccc-4905-b387-95ee5398d9f1_800x800.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div>]]></content:encoded></item><item><title><![CDATA[Europe Has a Cybersecurity Skills Problem. 
Certifications Are Part of the Answer.]]></title><description><![CDATA[And why Mile2 makes the difference]]></description><link>https://www.varthalitis.eu/p/europe-has-a-cybersecurity-skills</link><guid isPermaLink="false">https://www.varthalitis.eu/p/europe-has-a-cybersecurity-skills</guid><dc:creator><![CDATA[Angelos Varthalitis]]></dc:creator><pubDate>Fri, 27 Mar 2026 14:52:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cLT9!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6e49acf-9c42-4d01-b9e7-1a13a04d9488_400x400.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!YqXX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af7489a-5186-4f5d-93c4-16a9dba08673_418x186.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!YqXX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af7489a-5186-4f5d-93c4-16a9dba08673_418x186.png 424w, https://substackcdn.com/image/fetch/$s_!YqXX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af7489a-5186-4f5d-93c4-16a9dba08673_418x186.png 848w, https://substackcdn.com/image/fetch/$s_!YqXX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af7489a-5186-4f5d-93c4-16a9dba08673_418x186.png 1272w, https://substackcdn.com/image/fetch/$s_!YqXX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af7489a-5186-4f5d-93c4-16a9dba08673_418x186.png 
1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!YqXX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af7489a-5186-4f5d-93c4-16a9dba08673_418x186.png" width="418" height="186" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3af7489a-5186-4f5d-93c4-16a9dba08673_418x186.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:186,&quot;width&quot;:418,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3467,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.varthalitis.eu/i/192320485?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af7489a-5186-4f5d-93c4-16a9dba08673_418x186.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!YqXX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af7489a-5186-4f5d-93c4-16a9dba08673_418x186.png 424w, https://substackcdn.com/image/fetch/$s_!YqXX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af7489a-5186-4f5d-93c4-16a9dba08673_418x186.png 848w, https://substackcdn.com/image/fetch/$s_!YqXX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af7489a-5186-4f5d-93c4-16a9dba08673_418x186.png 1272w, https://substackcdn.com/image/fetch/$s_!YqXX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3af7489a-5186-4f5d-93c4-16a9dba08673_418x186.png 
1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>There&#8217;s a conversation I keep having with CISOs and IT leaders across Southern Europe. It goes something like this:</p><p>&#8220;We know NIS2 is coming. We know we need to upskill our teams. But where do we even start?&#8221;</p><p>The honest answer, until recently, was: fly to London, Amsterdam, or Washington D.C. Pay thousands. Hope the training sticks.</p><p>That never sat right with me.</p><h2>The Certification Landscape Is Broken &#8212; Especially for Mid-Career Professionals</h2><p>Let&#8217;s be direct. The cybersecurity certification market has a structural problem. It&#8217;s dominated by two or three household names that have become gatekeepers rather than enablers. Multi-thousand-euro exam fees. Rigid experience requirements that punish career changers. Renewal cycles designed to extract revenue, not validate competence.</p><p>Meanwhile, the actual threat landscape doesn&#8217;t care about your acronyms. Ransomware groups aren&#8217;t checking whether your SOC analyst holds a particular certificate before they hit your infrastructure.</p><p>What matters is whether your people genuinely understand information security management, risk assessment, governance frameworks, and incident response &#8212; and whether they can prove it through a credible, internationally accredited credential.</p><h2>Why I Partnered with Mile2</h2><p>When I evaluated training providers for AVSecAdvisory, I had three non-negotiable criteria:</p><p><strong>Accreditation that holds up to scrutiny.</strong> Mile2 certifications are accredited under ISO/IEC 17024 through ANAB. That&#8217;s the same accreditation standard the big players use. When a regulator, auditor, or client asks &#8220;is this certification legitimate?&#8221;, the answer is unambiguous.</p><p><strong>Breadth across the security domain.</strong> Not just penetration testing. Not just governance. 
Mile2 covers the full spectrum &#8212; from hands-on security operations to ISMS auditing, from AI security governance to strategic CISO-level leadership. That matters because real security teams need diverse skills, not a monoculture of one certification track.</p><p><strong>Accessibility.</strong> This is the one that sealed it. Mile2&#8217;s model makes professional cybersecurity certification achievable without remortgaging your house. The price point, the flexible delivery, the lack of artificial barriers to entry &#8212; it opens doors that the traditional providers keep firmly shut.</p><h2>What This Means in Practice</h2><p>Through AVSecAdvisory Academy, professionals across Greece and Europe now have access to the full Mile2 certification portfolio. That includes credentials in:</p><ul><li><p>Information Security Management Systems &#8212; for those implementing or auditing ISO 27001</p></li><li><p>Cybersecurity Strategy and Management &#8212; for security leaders who need to communicate risk to boards</p></li><li><p>AI Security Governance &#8212; increasingly critical as organizations adopt AI under frameworks like ISO 42001</p></li><li><p>Security Operations and Architecture &#8212; for the practitioners defending infrastructure every day</p></li></ul><p>Every certification maps to real-world frameworks: ISO 27001, NIS2, NIST, GDPR. These aren&#8217;t theoretical exercises. They&#8217;re designed to make professionals more effective at their actual jobs.</p><h2>The Timing Isn&#8217;t Coincidental</h2><p>Europe is in the middle of the most significant cybersecurity regulatory shift in a decade. NIS2 is transforming how organizations across 18 sectors approach security governance. DORA is reshaping financial services. The AI Act is creating entirely new compliance requirements.</p><p>All of these frameworks demand one thing in common: competent people. Not just tools. Not just policies on paper. 
Trained, certified professionals who can implement, manage, and audit security programs.</p><p>The organizations that invest in their people now will be the ones that navigate this transition successfully. The ones that don&#8217;t will learn the hard way that compliance isn&#8217;t something you can buy off the shelf.</p><h2>A Personal Note</h2><p>I hold five Mile2 certifications myself &#8212; C)CSSM, C)ISMS-LA/LI, C)AICSO, C)CSSA, and C)ISSO-A. I didn&#8217;t pursue them for the letters after my name. I pursued them because the curriculum genuinely made me better at my work as a CISO and as a consultant advising organizations on security governance.</p><p>That&#8217;s the standard I want to bring to every professional who trains through AVSecAdvisory Academy. Not certification for certification&#8217;s sake, but structured learning that translates into measurable capability.</p><p>If you&#8217;re a security professional looking to level up, or an organization trying to build a credible security function in the NIS2 and AI era, this is worth exploring.</p><p>AVSecAdvisory Academy: <a href="http://academy.avsecadvisory.eu">academy.avsecadvisory.eu</a></p><p>#Cybersecurity #NIS2 #Certifications #InfoSec #Europe #Mile2 #CISO #CyberTraining #ISO27001 #GRC</p>]]></content:encoded></item><item><title><![CDATA[Your IAM Programme Was Built for Humans. It Has No Idea What to Do With Agents. 
]]></title><description><![CDATA[Non-Human Identities are the new Shadow AI &#8212; ungoverned, fast-proliferating, and now the fastest-growing attack vector in enterprise infrastructure.]]></description><link>https://www.varthalitis.eu/p/your-iam-programme-was-built-for</link><guid isPermaLink="false">https://www.varthalitis.eu/p/your-iam-programme-was-built-for</guid><dc:creator><![CDATA[Angelos Varthalitis]]></dc:creator><pubDate>Mon, 23 Mar 2026 07:30:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!C0MA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22313c59-cdd6-4195-b56f-6cf52d4de51d_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!C0MA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22313c59-cdd6-4195-b56f-6cf52d4de51d_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!C0MA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22313c59-cdd6-4195-b56f-6cf52d4de51d_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!C0MA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22313c59-cdd6-4195-b56f-6cf52d4de51d_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!C0MA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22313c59-cdd6-4195-b56f-6cf52d4de51d_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!C0MA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22313c59-cdd6-4195-b56f-6cf52d4de51d_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!C0MA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22313c59-cdd6-4195-b56f-6cf52d4de51d_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/22313c59-cdd6-4195-b56f-6cf52d4de51d_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3109890,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.varthalitis.eu/i/191837573?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22313c59-cdd6-4195-b56f-6cf52d4de51d_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!C0MA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22313c59-cdd6-4195-b56f-6cf52d4de51d_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!C0MA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22313c59-cdd6-4195-b56f-6cf52d4de51d_1536x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!C0MA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22313c59-cdd6-4195-b56f-6cf52d4de51d_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!C0MA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22313c59-cdd6-4195-b56f-6cf52d4de51d_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>There is a quiet governance crisis unfolding in enterprise environments, and most boards have not been briefed on it. 
It does not involve a new threat actor or a zero-day exploit. It involves a category of identity that your organisation is almost certainly creating faster than it can govern &#8212; and which your existing Identity and Access Management programme was never designed to handle.</p><p>That category is Non-Human Identities. API keys, service accounts, OAuth tokens, certificates, and &#8212; increasingly &#8212; the credentials used by AI agents operating autonomously across your infrastructure. They do not log in from recognisable locations. They do not follow normal working hours. They do not respond to behavioural anomaly detection tuned for human patterns. And they outnumber your human users by ratios that, until recently, would have seemed implausible.</p><p>As of 2025, NHIs outnumber human identities in enterprise environments at a ratio of 144 to 1 &#8212; a 44% increase from the year prior.<sup>[1]</sup> That ratio is accelerating as organisations deploy AI agents at scale. The governance infrastructure being applied to manage them has not kept pace. The result is a structural exposure that sits squarely within the scope of NIS2 Article 21 and, for many organisations, the EU AI Act &#8212; but which most compliance programmes have not yet addressed.</p><p>This is the new Shadow AI problem. Only this time, it is not employees quietly using unapproved tools. It is the tools themselves acquiring identities, permissions, and access &#8212; at machine speed, outside the visibility of your IAM programme.</p><h2>The Scale of the Problem</h2><p>Understanding why NHIs represent a governance failure requires appreciating their operational characteristics. Unlike human identities, NHIs bypass multi-factor authentication by design. They operate continuously &#8212; including at 3am, from multiple locations simultaneously, authenticating thousands of times per minute. They persist indefinitely without lifecycle management unless someone explicitly deprovisions them. 
And they are often granted standing privileges far beyond what any specific task requires.</p><p>The data on governance maturity is stark. According to CSA research, fewer than one in four organisations has documented and formally adopted policies for creating or removing AI identities.<sup>[2]</sup> Only 12% report high confidence in their ability to prevent attacks via NHIs.<sup>[3]</sup> A separate study found that 68% of IT security incidents now involve machine identities, and half of enterprises surveyed have experienced a breach attributable to unmanaged NHIs.<sup>[4]</sup></p><p>The attack surface extends beyond the agents themselves. Research by Entro Labs found that nearly half of all exposed secrets &#8212; the credentials NHIs use to authenticate &#8212; were discovered outside code repositories entirely: in CI/CD workflows, collaboration platforms, and, notably, SharePoint, which alone accounted for almost one in five credential exposures.<sup>[5]</sup> The implication is that the credential sprawl problem is not contained within your development environment. It is distributed across the same productivity tools your organisation runs on.</p><p>Within this population, the privilege concentration is particularly concerning. One in twenty AWS machine identities carries full administrator-level permissions.<sup>[6]</sup> These are not edge cases. They are the architecture of the average enterprise cloud environment, accumulated through years of permissive provisioning and insufficient lifecycle governance.</p><h2>Why Existing IAM Programmes Fail Here</h2><p>The core problem is architectural. Traditional IAM was designed around a set of assumptions that NHIs violate systematically.</p><p>Human IAM assumes predictable behaviour: employees work defined hours, access systems from consistent locations, and interact with technology in recognisable patterns. 
Anomaly detection, behavioural baselines, and privileged access review cycles are all calibrated to these patterns. NHIs break every one of them &#8212; legitimately. A service account that authenticates thousands of times per minute from multiple IP ranges is not compromised; that is its normal operational profile.</p><p>Human IAM assumes a defined lifecycle: onboarding creates the identity, role changes update permissions, offboarding removes access. NHIs are provisioned dynamically, often by developers building pipelines or deploying agents, without equivalent offboarding discipline. The result is credential sprawl: active credentials attached to decommissioned services, orphaned API keys retaining production access, and service accounts whose owners have long since left the organisation.</p><p>Human IAM assumes visibility: your HR system feeds your identity provider, which feeds your access management tooling. NHIs proliferate across cloud environments, SaaS platforms, and CI/CD pipelines in ways that no single system of record captures. As the Cloud Security Alliance notes, the challenge is not merely governance &#8212; it is that most organisations do not have a complete inventory of what NHIs exist in their environment, let alone what they are permitted to do.<sup>[7]</sup></p><p>The KuppingerCole Leadership Compass on Non-Human Identity Management puts it plainly: traditional IAM and PAM systems were not designed to handle the dynamic and large-scale nature of these entities. They do not adequately address the security complexities of device and system accounts, and this gap in governance not only poses significant security risks but also hampers operational agility.<sup>[8]</sup></p><p>The arrival of agentic AI compounds this in a specific and important way. AI agents are not static service accounts. They are autonomous systems that can spawn sub-agents, mint new credentials, and acquire permissions dynamically as they execute multi-step tasks. 
They operate at machine speed across APIs, databases, and SaaS integrations &#8212; often without a human in the loop. The blast radius of a compromised agent credential is not the agent itself; it is every system the agent has permission to touch, and every downstream agent it can reach.</p><h2>The Regulatory Exposure</h2><p>For NIS2-covered entities, NHI governance is not a future compliance consideration. It is a current one.</p><p>Article 21(2)(i) of the NIS2 Directive explicitly requires the use of multi-factor authentication or continuous authentication solutions, and policies and procedures regarding cryptography and encryption.<sup>[9]</sup> By design, NHIs cannot satisfy MFA requirements &#8212; they are non-human, and authentication mechanisms for machine identities operate differently. The compliance question is whether your organisation has equivalent controls in place for NHIs: short-lived credentials, certificate-based authentication, zero standing privilege architectures, and automated rotation policies. Most organisations do not.</p><p>Article 21(2)(d) requires access control and asset management. Applied to NHIs, this means maintaining an inventory of machine identities, assigning accountable human owners to each credential, and enforcing least-privilege access. The 78% of organisations that lack formal NHI provisioning and deprovisioning policies are, in operational terms, running an access control gap of significant regulatory consequence.<sup>[10]</sup></p><p>The AI Act introduces a parallel and reinforcing obligation. From 2 August 2026, full compliance requirements apply to Annex III high-risk AI systems &#8212; which include AI used in critical infrastructure, essential services, and employment decisions.<sup>[11]</sup> Article 9 of the AI Act mandates a continuous, documented risk management process throughout the AI system lifecycle. 
For organisations deploying agentic AI in NIS2-covered sectors, the intersection of these two frameworks creates a dual governance obligation: NIS2 governs the security of the network and information systems the agents operate across; the AI Act governs the agents themselves. Compliance with one does not exempt an organisation from the obligations of the other.</p><p>The governance gap is therefore not merely a security programme problem. It is a compliance programme problem with two distinct regulatory clocks running simultaneously.</p><h2>What Boards Are Not Being Told</h2><p>The board-level conversation about AI has, for most organisations, focused on productivity, risk appetite, and governance frameworks for AI adoption. It has not yet adequately addressed the identity implications of the AI systems already deployed.</p><p>Boards understand &#8212; at least in principle &#8212; that employees&#8217; use of unauthorised AI tools creates Shadow AI risk. What they are rarely told is that the AI tools they have authorised are generating their own credentials, acquiring their own permissions, and operating in ways that existing security controls cannot see or constrain.</p><p>The IANS Research report on identity security heading into 2026 ranked &#8216;Identity Assurance for an AI World&#8217; as the second-highest CISO priority, scoring 4.46 out of 5.<sup>[12]</sup> Yet the same research found a significant gap between recognition of the problem and organisational capacity to address it &#8212; with IAM investment as a proportion of security budget varying from 6% at smaller organisations to 12% at large enterprises, and manual processes still dominant across the market.</p><p>The strategic board argument is this: the organisation is creating a new class of digital actor &#8212; AI agents &#8212; that inherits enterprise access, operates at machine speed, and sits outside the governance model that applies to human access. 
Each agent represents a credential that, if compromised, provides an attacker with persistent access equivalent to the agent&#8217;s permissions, without triggering human-centric security controls. The attack is not sophisticated. The access it provides is.</p><p>As BeyondTrust&#8217;s analysis frames it, the mandate for the security function is to shift the question from &#8216;Who has access?&#8217; to &#8216;What has access to what?&#8217;<sup>[13]</sup> Boards should be asking the same question.</p><h2>What CISOs Should Do Now</h2><p>The NHI governance problem does not require a new technology investment as the first step. It requires a governance posture shift, followed by targeted tooling to operationalise it.</p><p>&#8226; <strong>Inventory before you govern. </strong>You cannot apply policy to identities you cannot see. The starting point is a comprehensive discovery exercise across cloud environments, SaaS platforms, CI/CD pipelines, and collaboration tools. This is not a one-time exercise &#8212; NHI creation is continuous, and discovery must be too.</p><p>&#8226; <strong>Assign human ownership to every NHI. </strong>Every machine credential should have a named human accountable for its existence, its permissions, and its deprovisioning. Orphaned credentials &#8212; those whose owners have left or whose originating service has been decommissioned &#8212; are the most common source of standing privilege that attackers exploit.</p><p>&#8226; <strong>Apply least privilege architecture to machine identities. </strong>The one-in-twenty rate of AWS machine identities carrying full administrator access is not an outlier &#8212; it is the result of permissive defaults and insufficient review cycles. NHI privilege should be scoped to specific tasks, time-limited where possible, and reviewed on the same cadence as privileged human access.</p><p>&#8226; <strong>Design for zero standing privilege for AI agents. 
</strong>AI agents should receive elevated permissions only for the duration of the specific task that requires them, with automatic revocation on completion. This is not a theoretical architecture &#8212; it is the practical application of least privilege to autonomous systems operating at machine speed.</p><p>&#8226; <strong>Brief the board on the regulatory intersection. </strong>NIS2 Article 21 access control obligations and the AI Act&#8217;s Article 9 risk management requirements together create a documented governance obligation for NHI security. The board should understand that the AI it has authorised sits within a regulatory perimeter that includes identity governance &#8212; and that the compliance gap, if not addressed, carries both supervisory and personal liability risk under NIS2 Article 20.</p><p>&#8226; <strong>Treat NHI governance as the next phase of your Shadow AI programme. </strong>Shadow AI was about employees creating ungoverned AI usage. NHI proliferation is about authorised AI systems creating ungoverned identity sprawl. The governance logic is the same; the technical response needs to evolve to match.</p><h2>The Governance Moment</h2><p>Every major shift in enterprise IT architecture has eventually created an identity crisis &#8212; not in the existential sense, but in the governance sense. Cloud migration multiplied machine identities beyond what on-premise IAM could track. SaaS adoption distributed credentials across platforms that no single identity provider controlled. Each time, the security function ran behind the architecture, governing the previous model while the new one accumulated ungoverned exposure.</p><p>Agentic AI is the next instance of this pattern. The difference is that AI agents do not merely hold credentials passively &#8212; they use them autonomously, at speed, across systems, making decisions with real operational and regulatory consequences. 
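</p><p><em>The zero standing privilege pattern recommended above can be sketched in a few lines: an agent receives a narrowly scoped, short-lived grant with a named human owner, and the grant revokes itself on expiry. This is a minimal illustration only; the class and field names are assumptions for the example, not a reference to any product&#8217;s API.</em></p>

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class Grant:
    agent_id: str
    scope: str          # single task-scoped permission, not a broad role
    owner: str          # the accountable human for this credential
    expires_at: float   # absolute expiry: no standing privilege
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

class JITGrantStore:
    """Illustrative just-in-time grant store for agent credentials."""

    def __init__(self, default_ttl: float = 300.0):
        self.default_ttl = default_ttl    # short-lived by default
        self._grants: dict[str, Grant] = {}

    def issue(self, agent_id: str, scope: str, owner: str) -> Grant:
        """Grant one scoped permission for the duration of one task."""
        grant = Grant(agent_id, scope, owner,
                      expires_at=time.monotonic() + self.default_ttl)
        self._grants[grant.token] = grant
        return grant

    def check(self, token: str, scope: str) -> bool:
        """A token is valid only if it exists, matches the scope, and is unexpired."""
        grant = self._grants.get(token)
        if grant is None or grant.scope != scope:
            return False
        if time.monotonic() >= grant.expires_at:
            del self._grants[token]   # automatic revocation on expiry
            return False
        return True

    def revoke(self, token: str) -> None:
        """Explicit revocation on task completion."""
        self._grants.pop(token, None)
```

<p><em>In production this control would live in a secrets manager or workload identity platform rather than application code; the sketch only shows the shape of the control: a named owner, a narrow scope, and a built-in expiry for every elevated permission.</em></p><p>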
The blast radius of a governance failure is no longer measured in data records exposed; it is measured in automated actions taken.</p><p>The organisations that will navigate this well are not those with the most sophisticated AI adoption. They are those that treated identity governance as a foundational control rather than a supporting function &#8212; and that extended that governance to non-human actors before the incidents made the case for them.</p><p>Your IAM programme was built for humans. The question is whether you will extend it to agents before an attacker does it for you.</p><p><strong>References</strong></p><p><strong>[1] </strong><a href="https://www.cybersecuritytribe.com/news/research-reveals-44-growth-in-nhis-from-2024-to-2025">Entro Labs &#8212; NHI &amp; Secrets Risk Report H1 2025</a></p><p><strong>[2] </strong><a href="https://cloudsecurityalliance.org/artifacts/state-of-nhi-and-ai-security-survey-report">CSA &#8212; State of Non-Human Identity and AI Security Survey Report</a></p><p><strong>[3] </strong><a href="https://cloudsecurityalliance.org/artifacts/state-of-nhi-and-ai-security-survey-report">CSA &#8212; State of Non-Human Identity and AI Security Survey Report</a></p><p><strong>[4] </strong><a href="https://www.obsidiansecurity.com/blog/what-are-non-human-identities-nhi-security-guide">Obsidian Security &#8212; What Are Non-Human Identities: NHI Security Guide</a></p><p><strong>[5] </strong><a href="https://www.cybersecuritytribe.com/news/research-reveals-44-growth-in-nhis-from-2024-to-2025">Entro Labs &#8212; NHI &amp; Secrets Risk Report H1 2025</a></p><p><strong>[6] </strong><a href="https://www.cybersecuritytribe.com/news/research-reveals-44-growth-in-nhis-from-2024-to-2025">Entro Labs &#8212; NHI &amp; Secrets Risk Report H1 2025</a></p><p><strong>[7] </strong><a href="https://cloudsecurityalliance.org/blog/2026/03/20/2026-securing-the-agentic-control-plane">CSA &#8212; Securing the Agentic Control Plane, March 
2026</a></p><p><strong>[8] </strong><a href="https://www.kuppingercole.com/research/lc80974/non-human-identity-management">KuppingerCole &#8212; Leadership Compass: Non-Human Identity Management</a></p><p><strong>[9] </strong><a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022L2555">Directive (EU) 2022/2555 &#8212; NIS2 Directive, Article 21(2)(i)</a></p><p><strong>[10] </strong><a href="https://www.msspalert.com/news/security-teams-mssps-will-wrestle-with-agentic-ai-non-human-identities-in-2026">MSSP Alert &#8212; Security Teams Will Wrestle with Agentic AI, Non-Human Identities in 2026</a></p><p><strong>[11] </strong><a href="https://artificialintelligenceact.eu/annex/3/">EU AI Act &#8212; Regulation (EU) 2024/1689, Article 9 and Annex III</a></p><p><strong>[12] </strong><a href="https://www.iansresearch.com/resources/all-blogs/post/security-blog/2026/02/24/ai-agents-are-creating-an-identity-security-crisis-in-2026">IANS Research &#8212; AI Agents Are Creating an Identity Security Crisis in 2026</a></p><p><strong>[13] <a href="https://www.beyondtrust.com/blog/entry/preventing-shadow-ai-and-nhi-risk">BeyondTrust &#8212; Agentic AI Security: Securing Shadow AI &amp; Non-Human Identities</a></strong></p><p></p>]]></content:encoded></item><item><title><![CDATA[AI Governance Without the CISO Is Just Innovation Theatre]]></title><description><![CDATA[If your AI governance committee doesn&#8217;t include your CISO, it isn&#8217;t a governance committee &#8212; it&#8217;s a project team with a compliance-shaped blind spot.]]></description><link>https://www.varthalitis.eu/p/ai-governance-without-the-ciso-is</link><guid isPermaLink="false">https://www.varthalitis.eu/p/ai-governance-without-the-ciso-is</guid><dc:creator><![CDATA[Angelos Varthalitis]]></dc:creator><pubDate>Thu, 19 Mar 2026 07:20:12 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!kewH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f8fed15-b38f-4cd5-a0b6-18a449c461f7_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!kewH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f8fed15-b38f-4cd5-a0b6-18a449c461f7_1536x1024.png" width="1456" height="971" alt=""></figure></div><p>Every enterprise wants to say it governs AI responsibly. Most of them are performing the appearance of governance while structurally excluding the one function equipped to operationalise it.</p><p>Across industries, organisations are standing up AI governance committees, drafting usage policies, and appointing oversight leads. The energy is real. The problem is who&#8217;s at the table &#8212; and who isn&#8217;t. In too many cases, AI governance is being led by Chief Data Officers, Chief Technology Officers, or innovation teams. Security leadership, when included at all, is added as an afterthought: a reviewer, not an architect.</p><p>The result is governance frameworks that look compelling on slides but have no operational teeth. 
They address ethics, bias, and transparency &#8212; all important &#8212; but skip the controls that determine whether an AI system is actually safe to run in production: identity management, data classification, access governance, incident response, and regulatory exposure mapping. These are not theoretical concerns. They are the CISO&#8217;s daily operating reality.</p><p>This matters now more than ever. The EU AI Act&#8217;s high-risk obligations become enforceable in August 2026, requiring risk management systems, technical documentation, human oversight, and cybersecurity controls for AI systems operating in sensitive domains.<sup>[1]</sup></p><p>NIS2&#8217;s management accountability provisions are already active across transposed jurisdictions. And the threat landscape is moving faster than any governance committee&#8217;s quarterly meeting cadence.</p><h2>The Structural Exclusion Problem</h2><p>Half of organisations have now established dedicated AI governance committees.<sup>[2]</sup></p><p>That sounds like progress &#8212; until you examine who sits on them. The typical composition includes legal, compliance, data, engineering, and business leadership. Security is frequently listed as a &#8220;stakeholder&#8221; rather than a co-owner. This mirrors a pattern familiar from cloud adoption a decade ago: business units moved to the cloud, IT enabled it, and security was invited to review the architecture after procurement was complete.</p><p>The consequences were predictable then, and they are predictable now. When AI governance is structured without security co-ownership, you get usage policies without enforcement mechanisms, data processing agreements without classification controls, risk assessments that consider reputational and ethical risk but not operational or cyber risk, and vendor evaluations that assess capability but not the security posture of the AI supply chain.</p><p>The governance ownership question remains unresolved for many organisations. 
As one analysis put it, accountability is fragmented across the CTO, CDO, CRO, and CISO &#8212; and when everyone assumes someone else is responsible, no one actually governs.<sup>[3]</sup></p><h2>Shadow AI: The Governance Gap Made Visible</h2><p>If there is a single metric that exposes the gap between governance theatre and operational governance, it is the scale of Shadow AI.</p><p>Ninety-eight percent of organisations report employees using unsanctioned AI applications.<sup>[4]</sup></p><p>Three out of four CISOs have discovered unapproved GenAI tools already running in their environments, often with embedded credentials, API integrations, or OAuth tokens that connect directly into enterprise systems.<sup>[5]</sup></p><p>Nearly half of employees admit to adopting AI tools without employer approval, with a significant proportion sharing sensitive enterprise data including research datasets, employee records, and financial information with consumer-grade tools.<sup>[6]</sup></p><p>This is not an awareness problem. It is a structural one. Employees use unsanctioned AI tools because the organisation has not provided governed alternatives fast enough. When approved tools are made available with appropriate guardrails, unauthorised usage drops dramatically &#8212; one healthcare system reported an 89% reduction after deploying sanctioned alternatives.<sup>[7]</sup></p><p>Shadow AI is not evidence of rogue employees. It is evidence of a governance model that excluded the function responsible for managing exactly this kind of risk: the CISO.</p><h2>The AI Identity Crisis Security Leaders Are Already Managing</h2><p>Beyond Shadow AI, there is a deeper operational challenge that AI governance committees without security representation are not equipped to address: AI systems now function as identities within enterprise environments.</p><p>Seventy-one percent of organisations report that AI tools have access to core business systems such as CRM platforms and ERP systems. 
But only 16% say that access is governed effectively.<sup>[8]</sup></p><p>Nearly half of security leaders have already observed AI agents exhibiting unintended or unauthorised behaviour, and a third have dealt with an actual security incident or near-miss in the past year tied to AI systems.<sup>[9]</sup></p><p>These AI identities operate at machine speed, with privilege levels that in many cases no one explicitly granted. They don&#8217;t follow human access patterns. They don&#8217;t leave conventional audit trails. And they create a class of insider risk that traditional governance frameworks were never designed to handle.</p><p>This is the CISO&#8217;s domain. Identity governance, privilege management, access control, and insider threat detection are core security competencies. An AI governance committee that does not include the CISO is, by definition, ungoverned on the dimension that determines whether AI systems are operationally safe.</p><h2>The Regulatory Convergence the C-Suite Hasn&#8217;t Mapped</h2><p>The EU AI Act, NIS2, and GDPR are converging on a single operational reality: organisations deploying AI systems in Europe face overlapping governance obligations that no single function can manage alone.</p><p>The EU AI Act&#8217;s high-risk obligations, enforceable from August 2026, require risk management systems, data governance, technical documentation, human oversight, and &#8212; critically &#8212; cybersecurity and robustness requirements for AI systems operating in sensitive domains.<sup>[10]</sup></p><p>NIS2 already requires management bodies to approve and oversee cybersecurity risk management measures, with provisions for personal liability in cases of serious negligence. 
When the AI systems an organisation deploys become vectors for cyber risk &#8212; through data exposure, privilege escalation, or supply chain compromise &#8212; the question of who approved and oversaw their deployment becomes a board-level liability question.</p><p>GDPR&#8217;s requirements around automated decision-making, data protection impact assessments, and lawful basis for processing all intersect with AI deployment. The CISO is typically the function that operationalises data protection controls and coordinates with the DPO.</p><p>No CDO or CTO, however capable, can single-handedly navigate this convergence. It requires the security leader who understands how controls are implemented, how incidents are detected and reported, and how regulatory exposure translates into operational risk.</p><h2>What Governance Without the CISO Actually Looks Like</h2><p>The pattern is consistent. Organisations that exclude the CISO from AI governance produce frameworks that are structurally incomplete:</p><p>They draft AI usage policies that define acceptable use but lack monitoring or enforcement capabilities. They conduct vendor risk assessments for AI tools that evaluate functionality and data processing agreements but not the security architecture of the model&#8217;s supply chain. They build AI risk registers that capture ethical and reputational risk but not cyber risk scenarios: prompt injection, data poisoning, model exfiltration, or credential compromise via AI integrations. They approve AI deployments without identity governance, access controls, or incident response playbooks specific to AI-related incidents.</p><p>The board receives a governance dashboard that shows committee activity, policy coverage, and adoption metrics. 
What it does not show is whether the AI systems running in the enterprise are secure, whether their access is governed, or whether the organisation could detect and respond to an AI-related breach within its regulatory reporting timelines.</p><p>That gap is not a CISO problem. It is a board-level risk that the CISO has been structurally prevented from addressing.</p><h2>The Boardroom Alignment Problem</h2><p>There is a growing disconnect between how boards perceive AI governance maturity and the reality on the ground.</p><p>A survey of Fortune 500 executives found that 70% of organisations now have AI risk committees and 41% have dedicated AI governance teams &#8212; yet only 14% report being fully ready for AI deployment.<sup>[11]</sup></p><p>Proofpoint&#8217;s 2025 Voice of the CISO report showed that boardroom alignment with CISOs declined from 84% in 2024 to 64%, even as boards elevated business valuation as their top concern following a cyberattack.<sup>[12]</sup></p><p>The WEF Global Cybersecurity Outlook 2026 identified a telling perception gap: CEOs now rank AI vulnerabilities as their second-highest cyber risk, while CISOs do not even list it in their top three.<sup>[13]</sup></p><p>This is not a contradiction. It reflects the fact that CISOs are managing AI risk operationally &#8212; through identity controls, access governance, and incident response &#8212; while boards are experiencing it as a strategic abstraction. When CISOs are excluded from the governance structure that frames AI risk for the board, both sides lose: the board gets an incomplete picture, and the CISO loses the mandate to act.</p><h2>What Operational AI Governance Actually Requires</h2><p>Effective AI governance is not a committee. It is an operational capability that must be embedded across the enterprise. 
And it requires the CISO as a co-architect, not a reviewer.</p><p>AI identity governance should treat AI agents, copilots, and automated systems as non-human identities subject to the same lifecycle controls as human users: provisioning, least-privilege enforcement, access certification, and deprovisioning. This is identity and access management. It belongs to the CISO.</p><p>AI-specific incident response requires playbooks that address prompt injection, data exfiltration via AI tools, model manipulation, and the disclosure obligations that arise when AI systems process personal data or operate in NIS2-regulated environments. This is incident response. It belongs to the CISO.</p><p>AI supply chain security means evaluating not just the vendor&#8217;s data processing agreements but their model provenance, training data practices, API security, and the third-party components embedded in their AI stack. This is supply chain risk management. It belongs to the CISO.</p><p>AI risk quantification requires translating AI-specific threats into the same risk language the board uses for every other enterprise risk: financial exposure, regulatory penalty, operational disruption, and reputational impact. This is cyber risk communication. It belongs to the CISO.</p><p>None of this can happen from outside the governance structure.</p><h2>What CISOs Should Do Now</h2><p>The challenge for CISOs is not to wait for an invitation. It is to demonstrate that AI governance without security co-ownership is incomplete by design &#8212; and to present the board with a clear, risk-based argument for inclusion.</p><p>Claim the AI identity governance mandate. If AI systems are accessing enterprise data and business-critical applications, they fall within the CISO&#8217;s existing scope. Document the AI identities operating in your environment, their access levels, and the gaps in their governance. 
Present this to the board as an operational risk, not a technology update.</p><p>Map the regulatory convergence. Show the board where the EU AI Act, NIS2, and GDPR obligations overlap on AI governance and where the current governance structure has no owner for those intersections. Make the accountability gap visible.</p><p>Build AI-specific incident response capability now. Do not wait for a governance committee to authorise it. Develop playbooks for AI-related incidents, run tabletop exercises, and ensure your team can detect and respond to AI-specific threats within your existing regulatory reporting timelines.</p><p>Reframe the conversation. The CISO&#8217;s role in AI governance is not to slow down innovation. It is to ensure that AI deployments are operationally safe, legally defensible, and resilient. The argument is not &#8220;you need security approval&#8221; &#8212; it is &#8220;you need someone who can tell the board whether these systems are actually governed.&#8221;</p><h2>The Real Test</h2><p>The organisations that will navigate the AI governance challenge successfully are not those with the most impressive committee structures or the most comprehensive policy documents. They are the ones that have embedded security leadership into AI governance from the beginning &#8212; not as a checkbox, but as an operational co-owner.</p><p>AI governance without the CISO is not governance. It is innovation theatre: a performance of oversight that leaves the organisation structurally exposed to the risks that governance was supposed to manage.</p><p>The stage is set. 
The question is whether CISOs will be in the room where the decisions are made &#8212; or left to manage the consequences of decisions made without them.</p><h2>References</h2><p><strong>[1] </strong><a href="https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai">European Commission &#8212; AI Act Implementation Timeline</a></p><p><strong>[2] </strong><a href="https://www.iansresearch.com/resources/all-blogs/post/security-blog/2026/02/06/the-cisos-expanding-ai-mandate--leading-governance-in-2026">IANS Research &#8212; The CISO&#8217;s Expanding AI Mandate: Leading Governance in 2026</a></p><p><strong>[3] </strong><a href="https://medium.com/@senguptasantanu/ai-governance-in-2026-why-structure-doesnt-equal-capability-f1d9895fbd8e">Santanu Sengupta &#8212; AI Governance in 2026: Why Structure Doesn&#8217;t Equal Capability (Medium, January 2026)</a></p><p><strong>[4] </strong><a href="https://www.vectra.ai/topics/shadow-ai">Vectra AI &#8212; Shadow AI: Risks, Costs, and Enterprise Governance (2026)</a></p><p><strong>[5] </strong><a href="https://www.cybersecurity-insiders.com/2026-ciso-ai-risk-report/">Saviynt / Cybersecurity Insiders &#8212; 2026 CISO AI Risk Report</a></p><p><strong>[6] </strong><a href="https://www.cio.com/article/4124760/roughly-half-of-employees-are-using-unsanctioned-ai-tools-and-enterprise-leaders-are-major-culprits.html">CIO.com &#8212; Roughly Half of Employees Are Using Unsanctioned AI Tools (BlackFog Survey, January 2026)</a></p><p><strong>[7] </strong><a href="https://www.vectra.ai/topics/shadow-ai">Vectra AI &#8212; Shadow AI: Risks, Costs, and Enterprise Governance (Healthcare Brew / 2026 data)</a></p><p><strong>[8] </strong><a href="https://www.cybersecurity-insiders.com/2026-ciso-ai-risk-report/">Saviynt / Cybersecurity Insiders &#8212; 2026 CISO AI Risk Report</a></p><p><strong>[9] </strong><a href="https://www.cybersecurity-insiders.com/2026-ciso-ai-risk-report/">Saviynt / Cybersecurity Insiders &#8212; 2026 CISO 
AI Risk Report</a></p><p><strong>[10] </strong><a href="https://securiti.ai/whitepapers/eu-ai-act-what-changes-now-what-waits-2026/">Securiti &#8212; EU AI Act: What Changes Now vs What Starts in 2026</a></p><p><strong>[11] </strong><a href="https://medium.com/@senguptasantanu/ai-governance-in-2026-why-structure-doesnt-equal-capability-f1d9895fbd8e">Sedgwick 2026 Forecasting Report, cited in Sengupta, AI Governance in 2026 (Medium, January 2026)</a></p><p><strong>[12] </strong><a href="https://www.proofpoint.com/us/newsroom/press-releases/proofpoint-2025-voice-ciso-report">Proofpoint &#8212; 2025 Voice of the CISO Report</a></p><p><strong>[13] </strong><a href="https://www.fortinet.com/blog/ciso-collective/world-economic-forum-global-cybersecurity-outlook-2026-key-takeaways-for-cisos">World Economic Forum &#8212; Global Cybersecurity Outlook 2026 (via Fortinet CISO Collective)</a></p>]]></content:encoded></item><item><title><![CDATA[Wiper Malware, Broken Laws, and the EU's Big Regulatory Reset]]></title><description><![CDATA[Five stories shaping European cybersecurity this month &#8212; and what they mean for your business]]></description><link>https://www.varthalitis.eu/p/wiper-malware-broken-laws-and-the</link><guid isPermaLink="false">https://www.varthalitis.eu/p/wiper-malware-broken-laws-and-the</guid><dc:creator><![CDATA[Angelos Varthalitis]]></dc:creator><pubDate>Mon, 16 Mar 2026 08:23:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!_Tx7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed6fa8a2-1150-4a39-a94c-2e6ea42b9e24_1792x1024.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!_Tx7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed6fa8a2-1150-4a39-a94c-2e6ea42b9e24_1792x1024.heic" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_Tx7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed6fa8a2-1150-4a39-a94c-2e6ea42b9e24_1792x1024.heic 424w, https://substackcdn.com/image/fetch/$s_!_Tx7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed6fa8a2-1150-4a39-a94c-2e6ea42b9e24_1792x1024.heic 848w, https://substackcdn.com/image/fetch/$s_!_Tx7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed6fa8a2-1150-4a39-a94c-2e6ea42b9e24_1792x1024.heic 1272w, https://substackcdn.com/image/fetch/$s_!_Tx7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed6fa8a2-1150-4a39-a94c-2e6ea42b9e24_1792x1024.heic 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_Tx7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed6fa8a2-1150-4a39-a94c-2e6ea42b9e24_1792x1024.heic" width="1456" height="832" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ed6fa8a2-1150-4a39-a94c-2e6ea42b9e24_1792x1024.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:445228,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.varthalitis.eu/i/191104875?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed6fa8a2-1150-4a39-a94c-2e6ea42b9e24_1792x1024.heic&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_Tx7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed6fa8a2-1150-4a39-a94c-2e6ea42b9e24_1792x1024.heic 424w, https://substackcdn.com/image/fetch/$s_!_Tx7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed6fa8a2-1150-4a39-a94c-2e6ea42b9e24_1792x1024.heic 848w, https://substackcdn.com/image/fetch/$s_!_Tx7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed6fa8a2-1150-4a39-a94c-2e6ea42b9e24_1792x1024.heic 1272w, https://substackcdn.com/image/fetch/$s_!_Tx7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed6fa8a2-1150-4a39-a94c-2e6ea42b9e24_1792x1024.heic 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>March 2026 has been a busy month in European cybersecurity. A medical technology giant was hit by a wiper attack that wiped tens of thousands of devices. The EU proposed amendments that could ease the NIS2 compliance burden for small businesses. Germany&#8217;s registration deadline is weeks away. And a string of data breaches &#8212; including one at the European Commission itself &#8212; reminded us that no organisation is immune.</p><p>Here is what happened, why it matters, and what you should be doing about it.</p><p><strong>1. Stryker: The Attack That Proves Wiper Malware Is Now a Business Risk</strong></p><p>On 11 March 2026, Stryker &#8212; one of the world&#8217;s largest medical technology companies &#8212; suffered a devastating cyberattack.
Iranian-linked hackers using the group name Handala deployed wiper malware across the company&#8217;s global network, permanently erasing data from tens of thousands of devices.</p><p>This was not ransomware. The attackers did not want money. They wanted destruction.</p><p>The impact at Stryker&#8217;s Cork, Ireland headquarters alone was severe: over 5,500 employees were unable to work, with engineering, product design, and manufacturing operations brought to a standstill. Corporate laptops, mobile phones enrolled in Microsoft Intune, and servers running proprietary applications were wiped or rendered inoperable. Login screens were defaced. Remote device management systems were turned against the company they were meant to protect.</p><p>For supply chain professionals and risk managers, the Stryker attack carries a critical lesson: the impact of a cyberattack on a large enterprise cascades immediately to its SME suppliers and partners. Companies relying on Stryker for components, instruments, or services faced disruption with no warning and no control.</p><p>NIS2 explicitly requires organisations to assess and manage cybersecurity risk across their supply chains. The Stryker incident is a textbook example of why that obligation exists &#8212; and what happens when it is not taken seriously across the ecosystem.</p><blockquote><p><strong>&#8594; </strong><a href="https://cybersecuritynews.com/stryker-cyber-attack/">CyberSecurityNews &#8211; Stryker Cyber Attack: Iran-Linked Hackers, Wiper Malware, Cork HQ Impact</a></p><p><strong>&#8594; </strong><a href="https://www.dbdigest.com/2026/03/data-breaches-digest-week-11-2026.html">Data Breaches Digest &#8211; Week 11, 2026: Full Incident Roundup</a></p></blockquote><p><strong>2. The EU Proposes NIS2 Simplification &#8212; Including a New SME Category</strong></p><p>On 20 January 2026, as part of a broader cybersecurity legislative package, the European Commission proposed targeted amendments to the NIS2 Directive. 
The goal: reduce compliance complexity, increase legal clarity, and &#8212; notably &#8212; ease the burden on smaller businesses.</p><p>The headline change for SMEs is the proposed introduction of a new &#8216;small mid-cap&#8217; enterprise category. This would create a more proportionate compliance pathway for companies that currently fall under NIS2 scope but lack the resources of large enterprises. The Commission estimates that the amendments will ease compliance for approximately 28,700 companies, including 6,200 micro and small-sized enterprises.</p><p>Other proposed amendments include:</p><blockquote><p><strong>&#8226; </strong>Streamlined jurisdictional rules for cross-border entities</p><p><strong>&#8226; </strong>Simplified ransomware data collection requirements</p><p><strong>&#8226; </strong>A strengthened coordinating role for ENISA</p><p><strong>&#8226; </strong>Integration with the Digital Omnibus &#8216;single-entry-point&#8217; for incident reporting</p></blockquote><p>Once adopted, Member States will have one year to transpose the amended provisions. The proposal is now under negotiation, and political agreement is expected later in 2026.</p><p><em>For SMEs already struggling with the complexity of NIS2, these amendments are a signal that the Commission has heard the feedback &#8212; even if relief is still months away.</em></p><p>Importantly, the proposed amendments do not pause existing obligations. If your country has already transposed NIS2, the current rules apply now. 
The simplification measures, when they arrive, will refine &#8212; not replace &#8212; what is already on the books.</p><blockquote><p><strong>&#8594; </strong><a href="https://digital-strategy.ec.europa.eu/en/policies/nis2-directive">European Commission &#8211; Proposal for Targeted Amendments to the NIS2 Directive (January 2026)</a></p><p><strong>&#8594; </strong><a href="https://www.insideprivacy.com/european-union-2/what-to-watch-in-2026-key-eu-privacy-cybersecurity-developments/">Inside Privacy &#8211; What to Watch in 2026: Key EU Privacy &amp; Cybersecurity Developments</a></p></blockquote><p><strong>3. NIS2 Transposition: Germany&#8217;s April Deadline Is Now Weeks Away</strong></p><p>Germany completed its NIS2 implementation on 6 December 2025, when the revised BSI Act entered into force &#8212; well over a year after the EU&#8217;s October 2024 deadline. That delay is now behind us. What matters now is the compliance clock that started ticking the moment the law took effect.</p><p>German-registered entities that fall within NIS2 scope must register with the Federal Office for Information Security (BSI) within three months of the Act&#8217;s entry into force. That deadline is April 2026 &#8212; weeks away. Beyond registration, the revised BSI Act introduces:</p><blockquote><p><strong>&#8226; Personal liability for management: </strong>Under Section 38 of the BSI Act, members of management bodies are personally accountable for approving and overseeing cybersecurity risk management measures.</p><p><strong>&#8226; Significant fines: </strong>Particularly important entities face fines up to &#8364;10 million or 2% of global annual turnover. 
Important entities face up to &#8364;7 million or 1.4% of global turnover.</p><p><strong>&#8226; Expanded BSI enforcement powers: </strong>Including broad inspection rights, binding orders, and detailed documentation requirements.</p></blockquote><p>Germany&#8217;s approach is stricter than the baseline NIS2 Directive in several ways &#8212; most notably in how it layers new NIS2 obligations on top of the existing KRITIS framework, creating a more granular and demanding scoping model.</p><p>Across the EU more broadly, transposition is accelerating. Germany, Portugal, and Austria have all recently adopted national implementing legislation, while Spain, France, and Poland are nearing completion. As of early 2026, the majority of Member States have transposed NIS2 &#8212; meaning enforcement activity will follow.</p><blockquote><p><strong>&#8594; </strong><a href="https://www.mofo.com/resources/insights/251208-flipping-the-nis2-switch-what-germanys-implementation">Morrison Foerster &#8211; Flipping the NIS2 Switch: What Germany&#8217;s Implementation Means for 2026 Compliance</a></p><p><strong>&#8594; </strong><a href="https://www.twobirds.com/en/insights/2025/european-cybersecurity-regulatory-update-nis2-and-beyond">Bird &amp; Bird &#8211; European Cybersecurity Regulatory Update: NIS2 and Beyond</a></p></blockquote><p><strong>4. Cyber Resilience Act: Six Months to the First Hard Deadline</strong></p><p>The Cyber Resilience Act (CRA) entered into force in December 2024, and for most organisations it has felt abstract &#8212; something to think about later. That window is closing.</p><p>From 11 September 2026, manufacturers of products with digital elements must comply with the CRA&#8217;s vulnerability reporting requirements. This is the first hard, enforceable obligation under the Act. 
It requires manufacturers to report actively exploited vulnerabilities and serious cybersecurity incidents to ENISA and to the relevant national authority &#8212; within tight timelines.</p><p>The broader compliance picture looks like this:</p><blockquote><p><strong>&#8226; 11 September 2026: </strong>Vulnerability and incident reporting obligations begin</p><p><strong>&#8226; 11 December 2027: </strong>Full CRA compliance required &#8212; security-by-design, conformity assessments, CE marking, and technical documentation</p></blockquote><p>For SMEs that develop or sell software, hardware, or connected products in the EU market, the CRA is not optional. It applies regardless of whether the manufacturer is based inside or outside the EU &#8212; what matters is whether the product is placed on the EU market.</p><p><em>Six months is not long when you factor in supply chain audits, secure development process reviews, and the documentation trail regulators will expect to see.</em></p><p>The 10th Cybersecurity Standardization Conference, held in Brussels on 12 March 2026 and co-hosted by CEN, CENELEC, ETSI, and ENISA, dedicated significant discussion to the timeline of CRA standards and how they interact with NIS2, DORA, and eIDAS. The message from regulators and industry was consistent: begin preparing now, because the December 2027 deadline requires foundational work that cannot be done in the final months.</p><blockquote><p><strong>&#8594; </strong><a href="https://www.cencenelec.eu/news-events/news/2026/brief-news/2026-03-13-cybersecurity-conference-2026/">CEN-CENELEC &#8211; Highlights from the 10th Cybersecurity Standardization Conference, March 2026</a></p><p><strong>&#8594; </strong><a href="https://orcwg.org/cra/">Open Regulatory Compliance Working Group &#8211; The EU Cyber Resilience Act: Deadlines and Requirements</a></p></blockquote><p><strong>5.
When the EU Commission Gets Breached: A Reminder That No One Is Immune</strong></p><p>On 6 February 2026, the European Commission disclosed a data breach affecting staff. The incident was detected on 30 January, when its mobile device management infrastructure identified traces of unauthorised access &#8212; later linked to active exploitation of a vulnerability in Ivanti Endpoint Manager Mobile.</p><p>The same week, rail pass provider Eurail confirmed that customer data &#8212; including names, contact details, travel companion information, and passport numbers &#8212; had been stolen and offered for sale. Criminals claimed to have taken 1.3 terabytes from cloud storage and support systems, and threatened wider release if no buyer emerged.</p><p>Neither the Commission nor Eurail fits the profile of a poorly resourced organisation caught napping. Both have dedicated security functions, established processes, and regulatory obligations. Both were breached anyway.</p><p><em>If the European Commission cannot fully prevent a breach, your business certainly cannot. The goal is resilience, detection, and response &#8212; not the illusion of impermeability.</em></p><p>This is the core argument behind NIS2&#8217;s incident reporting requirements. The Directive does not assume organisations will never be breached. It assumes they will &#8212; and it requires them to have the governance structures, detection capabilities, and reporting processes in place to manage the aftermath effectively.</p><p>For SMEs, the lesson is practical: having an incident response plan is not a luxury. 
It is a baseline obligation under NIS2 for in-scope entities, and a basic act of business survival for everyone else.</p><blockquote><p><strong>&#8594; </strong><a href="https://www.brightdefense.com/resources/recent-data-breaches/">Bright Defense &#8211; List of Recent Data Breaches in 2026 (Eurail, EU Commission, and more)</a></p></blockquote><p><strong>What Should You Do With All of This?</strong></p><p>The five stories above are not isolated incidents. They form a pattern: the threat landscape is intensifying (Stryker, EU Commission, Eurail), regulatory enforcement is materialising (Germany&#8217;s April deadline, NIS2 transposition wave), and the legislative framework is evolving to address SME realities (NIS2 amendments, CRA implementation support).</p><p>If you run or advise a business in scope of NIS2 &#8212; or if you sell products with digital elements in the EU &#8212; here is where to focus your attention:</p><blockquote><p><strong>&#8226; Check your NIS2 status. </strong>With over 20 Member States now having transposed NIS2, the odds are that the rules apply in your country. If you are not sure whether your organisation is in scope, that question needs answering now.</p><p><strong>&#8226; Register if you are in Germany. </strong>The April 2026 BSI registration deadline is imminent. Do not miss it.</p><p><strong>&#8226; Map your supply chain. </strong>The Stryker attack is a case study in how cyber incidents spread through supplier ecosystems. NIS2 requires you to assess third-party risk. Start with your most critical vendors.</p><p><strong>&#8226; Plan for CRA. </strong>If your business develops or sells products with digital elements, the September 2026 vulnerability reporting deadline is your first hard CRA obligation. Begin building your process now.</p><p><strong>&#8226; Review your incident response plan. </strong>The EU Commission got breached via a known vulnerability type. 
Do you have a documented plan for detecting, containing, and reporting a significant incident?</p></blockquote><p>&#9472;&#9472;</p><p><strong>If you want a practical, jargon-free guide to working through all of this &#8212; scoping, governance, risk management, supply chain, incident response, and documentation &#8212; I wrote NIS2 Practical Compliance for SMEs exactly for this situation: business owners and directors who need to get compliant without a large security team and without a large budget.</strong></p><p><strong><a href="https://amazon.com/dp/B0GRQVQG8C">&#128214; Available on Amazon &#8594; amazon.com/dp/B0GRQVQG8C</a></strong></p><p><em>Angelos Varthalitis is a CISO with over 20 years of cybersecurity experience. He runs avsecadvisory and writes at varthalitis.eu. Views are his own.</em></p>]]></content:encoded></item><item><title><![CDATA[Europe's Cybersecurity Landscape Is Shifting — Five Developments You Cannot Afford to Miss]]></title><description><![CDATA[Regulation Is Moving. Enforcement Is Coming. Is Your Business Ready?]]></description><link>https://www.varthalitis.eu/p/europes-cybersecurity-landscape-is</link><guid isPermaLink="false">https://www.varthalitis.eu/p/europes-cybersecurity-landscape-is</guid><dc:creator><![CDATA[Angelos Varthalitis]]></dc:creator><pubDate>Thu, 12 Mar 2026 07:45:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cLT9!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6e49acf-9c42-4d01-b9e7-1a13a04d9488_400x400.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you follow EU cybersecurity regulation &#8212; or if you run a business that falls under it &#8212; the past few months have been unusually eventful. The European Commission has proposed the most significant overhaul of its cybersecurity framework since NIS2 itself. ENISA has published hard data on where the industry is genuinely struggling. 
And across the EU, the long-delayed enforcement machine is finally moving.</p><p>This article covers five developments that, taken together, tell a coherent story: Europe is raising the bar again, the talent to meet that bar is scarce, and the window for &#8216;wait and see&#8217; is closing.</p><h2>1. The EU&#8217;s New Cybersecurity Package: A Regulatory Upgrade Nobody Saw Coming</h2><p>On 20 January 2026, the European Commission proposed a comprehensive new cybersecurity package &#8212; a revision of the 2019 Cybersecurity Act, paired with targeted amendments to NIS2 itself. This is not a minor update.</p><p>The proposal introduces a horizontal framework for trusted ICT supply chain security &#8212; the first of its kind in EU law. It would allow the Commission and Member States to act jointly against &#8216;non-technical risks&#8217; in ICT supply chains: in plain terms, the ability to restrict suppliers from third countries that pose a strategic cybersecurity threat.</p><p><em>For the first time, the EU is treating supply chain security not just as a technical problem &#8212; but as a geopolitical one.</em></p><p>Beyond supply chain, the package significantly expands and simplifies the European Cybersecurity Certification Framework (ECCF). Organisations would be able to use certification to demonstrate compliance and obtain a presumption of conformity with NIS2 &#8212; a major simplification for entities already overwhelmed by audit requirements.</p><p>ENISA gets a budget increase of more than 75% and takes on new operational roles: managing EU-wide vulnerability databases, issuing early warnings, coordinating cybersecurity exercises, and operating a unified incident notification platform.</p><p>The package is now under negotiation, with political agreement expected later in 2026. Any adopted measures would be phased in over several years. 
But the direction is clear: the EU is building a more centralised, more aggressive cybersecurity architecture.</p><blockquote><p><strong>Source: </strong><a href="https://commission.europa.eu/news-and-media/news/new-measures-strengthen-cybersecurity-resilience-and-capabilities-2026-01-20_en">European Commission &#8211; New measures to strengthen cybersecurity resilience and capabilities (January 2026)</a></p><p><strong>Source: </strong><a href="https://www.mayerbrown.com/en/insights/publications/2026/02/european-commission-proposes-major-cybersecurity-package-to-strengthen-eu-cyber-resilience">Mayer Brown &#8211; European Commission Proposes Major Cybersecurity Package (February 2026)</a></p></blockquote><h2>2. ENISA&#8217;s NIS Investments 2025: The Talent Crisis Is Now Structural</h2><p>ENISA&#8217;s 6th annual NIS Investments report, published in December 2025, contains data that every CISO, policymaker, and business owner should read carefully. It surveyed 1,080 security professionals across all 27 Member States, and the picture it draws is one of a sector under serious strain.</p><p>The headline finding is a pivot from people to technology. Overall cybersecurity budgets have stabilised at around 9% of IT spending &#8212; roughly &#8364;1.5 million median across surveyed entities. But within those flat budgets, organisations are moving resources away from hiring and towards technology platforms and managed services. The reason is not strategic preference &#8212; it is necessity.</p><p><strong>The talent gap has become a structural problem. </strong>76% of organisations reported difficulty attracting cybersecurity professionals, and 71% struggle to retain them. Across the EU, the shortage is estimated at nearly 300,000 professionals. For SMEs, the numbers are starker: 59% struggle to fill cybersecurity roles.</p><p><em>The talent shortage is not a pipeline problem anymore. 
It is a structural constraint that is reshaping how organisations defend themselves &#8212; and not always for the better.</em></p><p>Regulatory compliance &#8212; primarily NIS2, DORA, and the CRA &#8212; is the main investment driver for 70% of organisations. This is generating real operational value: 41% credit compliance-driven investments with improving risk management, 35% with faster incident detection. Policy is working. But implementation is not keeping up.</p><p>Half of all organisations cited vulnerability and patch management as their top NIS2 implementation challenge. Nearly one in three had not conducted a cybersecurity assessment in the past 12 months. And 28% take more than three months to patch critical vulnerabilities.</p><p>The most concerning forward-looking finding: supply-chain and third-party compromises are the second highest concern for the future (47%), and SMEs &#8212; the very organisations most likely to be the entry point &#8212; report the lowest confidence in their ability to withstand a cyber incident.</p><blockquote><p><strong>Source: </strong><a href="https://www.enisa.europa.eu/publications/nis-investments-2025">ENISA &#8211; NIS Investments 2025 Main Report</a></p><p><strong>Source: </strong><a href="https://www.enisa.europa.eu/news/whats-driving-cybersecurity-investments-and-where-lie-the-challenges">ENISA &#8211; What&#8217;s Driving Cybersecurity Investments and Where Lie the Challenges</a></p></blockquote><h2>3. NIS2 Transposition: 20 Member States Down, the Rest Moving Fast</h2><p>The original NIS2 transposition deadline was October 2024. Most Member States missed it. Many organisations took this as a signal that enforcement was distant &#8212; a reasonable read at the time, but an increasingly dangerous one now.</p><p>As of January 2026, over 20 of the 27 EU Member States have formally completed NIS2 transposition into national law. Germany, Portugal, and Austria finalized their legislation in late 2025. 
Spain, France, and Poland are in the final stages. Ireland remains at an earlier stage, but even there, the legislative process is underway.</p><p>The pressure is coming from above. In May 2025, the European Commission issued reasoned opinions &#8212; formal legal warnings &#8212; to 19 Member States for failing to notify full transposition. The next step would have been referral to the Court of Justice of the EU. The message was received.</p><p><strong>One important nuance: </strong>national implementations vary. Some countries, including Italy and Slovenia, have extended the list of regulated sectors beyond the EU Directive&#8217;s baseline. Belgium has introduced enhanced board-level governance obligations. Others have aligned liability provisions with existing national civil law frameworks. If you operate across multiple EU jurisdictions, you need country-specific analysis &#8212; not just the Directive itself.</p><p>Enforcement activity is expected to accelerate through 2026 as national supervisory frameworks, registration systems, and entity designations fall into place. There are no major public enforcement actions under NIS2 yet &#8212; but that is a function of timing, not intent.</p><blockquote><p><strong>Source: </strong><a href="https://www.wavestone.com/en/insight/nis-2-european-countries-transposing-directive/">Wavestone &#8211; NIS 2 Directive: Transposition Status and What Companies Must Do (January 2026)</a></p><p><strong>Source: </strong><a href="https://www.goodwinlaw.com/en/insights/publications/2025/10/insights-practices-dpc-navigating-nis2-what-organisations-need-to-know">Goodwin &#8211; Navigating NIS2: What Organisations Need to Know as EU Implementation Unfolds</a></p></blockquote><h2>4. The Cyber Resilience Act: The Clock Is Running</h2><p>While most attention is focused on NIS2, a second major piece of EU cybersecurity legislation is quietly advancing toward enforcement. 
The Cyber Resilience Act (CRA) entered into force in December 2024, and its first binding obligations begin applying from September 2026, with full compliance required by December 2027.</p><p>From 11 September 2026, the CRA imposes mandatory reporting requirements for actively exploited vulnerabilities and serious cybersecurity incidents for any product with digital elements placed on the EU market &#8212; regardless of where it was designed or manufactured.</p><p><strong>Who does this affect? </strong>Any organisation that develops or sells software, hardware, or connected products in the EU. This includes manufacturers, ISVs, and device makers across every sector. The CRA is, in scope terms, far broader than NIS2.</p><p>The European Commission is currently finalising guidance to help manufacturers and developers meet CRA requirements, with a public consultation on that guidance still ongoing. In parallel, the EU is developing conformity assessment frameworks and certification schemes tied to CRA obligations.</p><p>For SMEs that develop or sell digital products, the CRA represents a compliance challenge with a firm deadline. Unlike NIS2 &#8212; where timing varied by Member State &#8212; the CRA applies at EU level with a single, fixed date.</p><blockquote><p><strong>Source: </strong><a href="https://www.europarl.europa.eu/news/en/agenda/plenary-news/2026-01-19/10/eu-cybersecurity-act">European Parliament &#8211; EU Cybersecurity Act revision (January 2026)</a></p><p><strong>Source: </strong><a href="https://www.insideprivacy.com/european-union-2/what-to-watch-in-2026-key-eu-privacy-cybersecurity-developments/">Inside Privacy &#8211; What to Watch in 2026: Key EU Privacy &amp; Cybersecurity Developments</a></p></blockquote><h2>5.
SMEs: Still the Weakest Link &#8212; and Now the Most Targeted</h2><p>Running through all four of the above stories is a common thread: SMEs are simultaneously the most exposed, the least prepared, and the most consequential element of the EU&#8217;s cybersecurity architecture.</p><p>The ENISA NIS Investments 2025 data is unambiguous on this point. SMEs consistently report the lowest confidence in their ability to prepare for, withstand, and recover from cyber incidents &#8212; across every threat category. They face the same compliance requirements as larger entities, with a fraction of the resources.</p><p>The supply chain dynamic makes this urgent. Large enterprises &#8212; the ones that are well-resourced and well-audited &#8212; increasingly depend on SME suppliers, integrators, and service providers. Attackers know this. Supply-chain and third-party compromises are now the second most feared future threat among EU organisations, at 47%.</p><p><em>The cybersecurity of the EU&#8217;s critical sectors is only as strong as the SMEs that sit in their supply chains. And right now, that chain has serious weak links.</em></p><p>The regulatory answer to this problem is clear in intent but imperfect in execution. NIS2 imposes supply chain risk management obligations on essential and important entities &#8212; meaning those larger organisations are now required to assess the security posture of their suppliers. This creates both pressure and opportunity for SMEs: pressure to demonstrate compliance, and opportunity to differentiate on security credentials.</p><p>What SMEs need most, according to the ENISA data, is accessible guidance, affordable tooling (including managed and cloud services), and practical skills development. The framework exists. 
The implementation support remains uneven.</p><blockquote><p><strong>Source: </strong><a href="https://www.enisa.europa.eu/publications/nis-investments-2025">ENISA &#8211; NIS Investments 2025</a></p></blockquote><h2>What This Means in Practice</h2><p>These five developments are not isolated news items. They form a coherent regulatory and operational picture:</p><blockquote><p>1. The EU is building a more centralised, more prescriptive cybersecurity framework &#8212; and it is moving faster than most organisations realise.</p><p>2. The talent and resource constraints are real, structural, and not going away &#8212; which makes the quality of your process and documentation more important, not less.</p><p>3. Enforcement is arriving market by market, with national supervisory authorities now actively building their frameworks.</p><p>4. If you are an SME &#8212; whether in scope directly or as part of a supply chain &#8212; the time to prepare is before your national authority comes looking, not after.</p></blockquote><p>If you want to understand what compliance actually looks like for a small or medium-sized business &#8212; without the jargon and without assuming you have a security team &#8212; that is exactly what I wrote <strong>NIS2 Practical Compliance for SMEs</strong> to address.</p><p>Available on Amazon: <a href="https://amazon.com/dp/B0GRQVQG8C">amazon.com/dp/B0GRQVQG8C</a></p><p><strong>About the Author</strong></p><p>Angelos Varthalitis is a CISO with over 20 years of cybersecurity experience, running <a href="https://avsecadvisory.com/">avsecadvisory</a> and writing at <a href="https://varthalitis.eu/">varthalitis.eu</a>. 
His work focuses on EU regulatory compliance for SMEs and non-technical decision-makers.</p>]]></content:encoded></item><item><title><![CDATA[Two Regulators, One Supply Chain]]></title><description><![CDATA[The Cyber Resilience Act creates a product-security layer that most NIS2 supply chain programmes weren't built to see.]]></description><link>https://www.varthalitis.eu/p/two-regulators-one-supply-chain</link><guid isPermaLink="false">https://www.varthalitis.eu/p/two-regulators-one-supply-chain</guid><dc:creator><![CDATA[Angelos Varthalitis]]></dc:creator><pubDate>Mon, 09 Mar 2026 20:46:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!uVtd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd29cbf26-1c5b-4536-88e0-f4319de8777f_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>The Cyber Resilience Act creates a product-security layer that most NIS2 supply chain programmes weren't built to see.</em></p><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!uVtd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd29cbf26-1c5b-4536-88e0-f4319de8777f_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!uVtd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd29cbf26-1c5b-4536-88e0-f4319de8777f_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!uVtd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd29cbf26-1c5b-4536-88e0-f4319de8777f_1536x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!uVtd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd29cbf26-1c5b-4536-88e0-f4319de8777f_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!uVtd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd29cbf26-1c5b-4536-88e0-f4319de8777f_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!uVtd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd29cbf26-1c5b-4536-88e0-f4319de8777f_1536x1024.png" width="1536" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d29cbf26-1c5b-4536-88e0-f4319de8777f_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:1024,&quot;width&quot;:1536,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:0,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!uVtd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd29cbf26-1c5b-4536-88e0-f4319de8777f_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!uVtd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd29cbf26-1c5b-4536-88e0-f4319de8777f_1536x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!uVtd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd29cbf26-1c5b-4536-88e0-f4319de8777f_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!uVtd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd29cbf26-1c5b-4536-88e0-f4319de8777f_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Every CISO who has spent the past eighteen months building a NIS2 supply chain programme has been working from the same mental 
model: assess vendors, contractualise obligations, monitor continuously. It is the right model. As NIS2 transposition laws take effect across Member States and supervisory enforcement accelerates through 2026, that model is being tested in practice.</p><p>But the mental model has a blind spot. It frames vendor risk as a question about how suppliers <em>operate</em>. It does not ask whether the products those suppliers provide are inherently secure &#8212; and that question now has a regulator attached to it.</p><p>The Cyber Resilience Act (CRA) is a product security regulation, not an organisational one. Its obligations fall on manufacturers, importers, and distributors of products with digital elements &#8212; a category defined broadly to cover hardware and software that includes software components or network connectivity, encompassing enterprise software, connected devices, embedded systems, OT equipment, and consumer IoT. It is not a directive requiring national transposition; it is a directly applicable regulation. And its first enforcement deadline arrives on 11 September 2026, six months from now.</p><p>For CISOs managing NIS2 supply chain programmes, this matters in a specific and underappreciated way: <em>your NIS2 vendor assurance work and your vendors' CRA obligations are part of the same risk problem</em>. They just have different regulators.</p><p><strong>The Architecture of Two Frameworks</strong></p><p>NIS2 and the CRA were designed as complementary instruments, not competing ones. The European Commission's explicit framing is that NIS2 builds operational resilience for the organisations deploying technology, while the CRA builds product resilience for the technology itself. 
In practice, as one technical analysis puts it, NIS2 ensures secure operations while CRA ensures secure products &#8212; together, they close the loop between governance and engineering.[1]</p><p>That complementarity creates a structural dependency that most CISOs have not fully mapped. Consider the logic:</p><p>Under Article 21(2)(d) of NIS2, essential and important entities must implement supply chain security measures covering their relationships with direct suppliers and service providers. This includes assessing supplier cybersecurity posture and, under some national implementations such as Poland's KSC amendment, formally excluding vendors that represent unacceptable risk.[2]</p><p>Under the CRA, the suppliers those entities are assessing now face their own mandatory obligations: to design products securely, manage vulnerabilities across the product lifecycle, provide security updates, and &#8212; from September 2026 &#8212; to report actively exploited vulnerabilities to ENISA and national CSIRTs within 24 hours.[3]</p><p>The dependency becomes a governance gap when the two frameworks are managed separately. An NIS2 supply chain programme that evaluates operational security practices but does not assess CRA readiness is evaluating half the picture. A vendor can have strong SOC processes and weak product security. Under the converging regulatory architecture, both matter.</p><p><strong>The CRA Timeline in Practice</strong></p><ul><li><p>December 2024 &#8212; CRA enters into force</p></li><li><p>11 June 2026 &#8212; Conformity assessment body framework operational; notified bodies available for product assessments</p></li><li><p>11 September 2026 &#8212; Vulnerability and incident reporting obligations apply to all in-scope manufacturers</p></li><li><p>11 December 2027 &#8212; Full product conformity requirements enforceable; SBOM obligation formally applies</p></li></ul><p>The September 2026 deadline is the immediate operational pressure point. 
It is six months away.</p><p><strong>What the CRA Reporting Deadline Actually Means</strong></p><p>The September 2026 deadline is frequently described as the CRA's "early" reporting obligation, as distinct from the full compliance requirements taking effect in December 2027. That framing, while technically accurate, understates the operational readiness it demands.</p><p>From 11 September 2026, manufacturers of products with digital elements must report actively exploited vulnerabilities and severe security incidents within 24 hours of becoming aware of them, via the ENISA-managed single reporting platform, which distributes the information to the relevant national CSIRTs. A detailed follow-up is required within 72 hours, and a final report within 14 days (for exploited vulnerabilities) or one month (for severe incidents).[4]</p><p>Critically, this reporting obligation applies to all products currently on the EU market &#8212; including those placed there before December 2027. Legacy products are not exempt from the reporting requirements even though they are exempt from the full conformity obligations.[5]</p><p>The operational implication of this is frequently missed: to report an actively exploited vulnerability within 24 hours, a manufacturer must already know what components are in the product. That knowledge requires a Software Bill of Materials (SBOM) and automated vulnerability monitoring. As one analysis summarises the hidden dependency: if you do not have a complete SBOM and real-time vulnerability monitoring, you will not even know whether your product is affected before the 24-hour clock expires.[6]</p><p>For CISOs, this has a direct supply chain governance implication. The NIS2 question "does this vendor have good security practices?" needs to be supplemented by a CRA-adjacent question: "does this vendor have the product transparency infrastructure to know when their products are compromised and report it within 24 hours?" 
Those are different questions with different answers.</p><p><strong>The Supply Chain Accountability Chain</strong></p><p>The CRA does not restrict accountability to the original manufacturer. It establishes what one legal analysis describes as a cascading chain of responsibility throughout the product lifecycle: manufacturers bear primary obligations, while importers and distributors assume secondary duties including verification of compliance and reporting of non-conformities to competent authorities.[7]</p><p>This matters for essential entities in a way that is not obvious from the NIS2 framework alone. Under the CRA:</p><ul><li><p>Importers must verify that manufacturers have met CRA obligations before placing products on the EU market.</p></li><li><p>Distributors must verify that the product bears CE marking and that required conformity documentation is present.</p></li><li><p>Any entity in the supply chain that modifies a product in a way that affects its security characteristics takes on manufacturer obligations for the modified product.</p></li></ul><p>The governance implication: an essential entity that procures digital products &#8212; servers, endpoint software, network equipment, IoT devices, industrial control systems &#8212; is operating in a supply chain that now has legally enforceable product-security accountability at multiple levels. 
When something goes wrong with a product that has been placed on the EU market without CRA conformity, the question of who failed their obligations runs from the original developer through importers and distributors to the end customer.</p><p>One analysis of the converging frameworks captures this clearly: a product that fails CRA requirements can create downstream compliance risk for customers subject to NIS2 or DORA.[8] The supply chain is not just a source of operational risk &#8212; it is a source of regulatory exposure.</p><p><strong>The SBOM as a Shared Governance Instrument</strong></p><p>The Software Bill of Materials has been discussed in cybersecurity governance circles for several years as a supply chain transparency tool. The CRA makes it something more specific: a product conformity instrument with regulatory teeth.</p><p>Under Annex I of the CRA, manufacturers are explicitly required to identify and document vulnerabilities and components contained in their products, including by drawing up a software bill of materials in a commonly used, machine-readable format covering at least top-level dependencies. That obligation becomes fully enforceable from December 2027. However, it is a practical prerequisite for the September 2026 reporting deadline: to report an actively exploited vulnerability within 24 hours, a manufacturer must already know what components are in the product. Market surveillance authorities may request the SBOM on a reasoned basis once the full framework applies.[9]</p><p>The NIS2 Directive does not explicitly require SBOMs from entities or their suppliers. 
But the ENISA guidance on NIS2 supply chain security &#8212; which functions as an interpretive guide for supervisory authorities during inspections &#8212; reinforces the expectation that essential entities can demonstrate software component transparency across their vendor relationships.[10]</p><p>In practice, the SBOM is becoming the point where CRA product obligations and NIS2 supply chain assurance converge. A vendor that can produce a current, machine-readable SBOM on request is demonstrating both CRA readiness and the kind of software transparency that NIS2 supply chain governance programmes should be requiring. A vendor that cannot is signalling both a CRA compliance risk and a NIS2 assurance gap.</p><p>CISOs designing or upgrading vendor risk programmes should treat SBOM availability as a tier-one assessment criterion for technology suppliers, not a future aspiration.</p><p><strong>What Has Not Changed &#8212; And What This Means</strong></p><p>It is worth being precise about scope. The CRA applies to manufacturers, importers, and distributors of products with digital elements. It does not directly impose obligations on essential entities as <em>operators</em> of those products &#8212; at least not under CRA's own provisions. The product-security obligations in September 2026 fall on the vendors, not the buyers.</p><p>This can create a false sense of separation. The argument runs: "CRA is a vendor problem; we just need to buy from compliant vendors." That argument has three weaknesses.</p><p>First, the September 2026 reporting deadline is live in six months. The SBOM and vulnerability management infrastructure required to meet a 24-hour reporting obligation is not trivial to build. 
Multiple industry analyses published in 2025 and early 2026 note that many manufacturers are still treating the CRA as a 2027 problem &#8212; overlooking the fact that the September reporting obligation requires operational readiness now.[6]</p><p>Second, NIS2 supply chain obligations require essential entities to assess and manage the cybersecurity posture of their suppliers. CRA non-compliance is a material factor in that assessment. The inability to report an exploited vulnerability within 24 hours is not a hypothetical risk &#8212; it is an information asymmetry that leaves the downstream customer operating on outdated incident intelligence.</p><p>Third, the January 2026 NIS2 amendments explicitly acknowledged that supply chain questionnaire approaches are inadequate and signalled a shift toward certification-based supply chain assurance. The CRA's CE marking and conformity assessment framework is designed to operate at the product level &#8212; and the logical direction of the Commission's regulatory architecture is that these two certification layers will eventually reinforce each other. CISOs who design their vendor risk programmes around CRA readiness now are building toward a more coherent compliance model, not running ahead of it.</p><p><strong>What CISOs Should Do Now</strong></p><p>The governance response to converging NIS2 and CRA obligations is not complex, but it requires deliberate action rather than deferred attention.</p><p><strong>Map the product exposure in your vendor base</strong></p><p>Identify which of your critical vendors are manufacturers, importers, or distributors of products with digital elements and therefore subject to CRA obligations. This is not all vendors &#8212; it is specifically those whose products involve software, firmware, or network connectivity placed on the EU market. 
Prioritise vendors of OT systems, endpoint software, network infrastructure, and IoT equipment.</p><p><strong>Add CRA readiness to vendor risk assessments</strong></p><p>For technology product vendors, current NIS2 supply chain questionnaires assess operational security practices. Add a CRA readiness dimension: does this vendor have SBOM capability? Do they have a coordinated vulnerability disclosure process? Are they prepared to notify ENISA within 24 hours of an actively exploited vulnerability in their products? These are answerable questions &#8212; and the answers differentiate mature vendors from those that will struggle in September.</p><p><strong>Treat SBOM availability as a procurement criterion</strong></p><p>For new technology procurements, require that vendors can produce a machine-readable SBOM for relevant products on request. This is not yet a universal market expectation, but it is the direction of regulatory travel and should be embedded in procurement criteria and contractual terms now rather than retrofitted later.</p><p><strong>Brief the board on converging regulatory exposure</strong></p><p>NIS2 management accountability provisions mean that supply chain governance failures carry personal liability. The CRA adds a product-security dimension to that liability landscape. Boards that understand only the NIS2 operational compliance obligations are missing part of the risk picture. The message to deliver: your supply chain is now subject to two overlapping regulatory frameworks, with different deadlines and different accountability chains &#8212; and managing the intersection is a board-level governance question.</p><p><strong>Engage vendors proactively on the September deadline</strong></p><p>Essential entities have market influence with their technology suppliers. 
Using that influence proactively &#8212; asking vendors about their CRA readiness before the September deadline, rather than discovering non-compliance after an incident &#8212; is both a risk management and a vendor relationship action. Vendors that know their customers are tracking CRA readiness have an additional incentive to prepare.</p><p><strong>The Governance Question the Frameworks Are Asking</strong></p><p>The emerging European cybersecurity framework is no longer separating operational security from product security. NIS2 governs how organisations operate. The CRA governs the security properties of the technology they depend on.</p><p>For CISOs, the practical consequence is direct: vendor risk management can no longer stop at organisational controls. It must extend into product transparency, component visibility, and lifecycle security commitments.</p><p>Two regulators, one supply chain, and increasingly one governance problem. The organisations that manage them as a unified challenge will be better positioned for the enforcement environment ahead. 
The organisations that manage them as parallel compliance projects will discover the intersection the hard way.</p><p><strong>References</strong></p><p>[1] <a href="https://avatao.com/cra-vs-nis2-whats-the-difference-and-why-both-matter-for-secure-development/">Avatao &#8212; CRA vs NIS2: What's the Difference and Why Both Matter (2025)</a></p><p>[2] <a href="https://nflo.tech/knowledge-base/ict-supply-chain-security-vendor-audit-nis2/">nflo.tech &#8212; ICT Supply Chain Security: Vendor Audit and NIS2 Compliance (January 2026)</a></p><p>[3] <a href="https://digital-strategy.ec.europa.eu/en/policies/cra-reporting">European Commission &#8212; CRA Reporting Obligations (accessed March 2026)</a></p><p>[4] <a href="https://www.bclplaw.com/en-US/events-insights-news/the-cyber-resilience-act-is-rewriting-the-rules-of-digital-products-safety.html">Bryan Cave Leighton Paisner &#8212; The Cyber Resilience Act Is Rewriting the Rules of Digital Products Safety (November 2025)</a></p><p>[5] <a href="https://digital-strategy.ec.europa.eu/en/policies/cra-summary">European Commission &#8212; The Cyber Resilience Act: Summary of the Legislative Text (accessed March 2026)</a></p><p>[6] <a href="https://www.keysight.com/blogs/en/tech/nwvs/2025/09/11/one-year-countdown-to-eu-cra-compliance-september-11-2026-changes-everything">Keysight &#8212; One Year Countdown to EU CRA Compliance: September 11, 2026 Changes Everything (September 2025)</a></p><p>[7] <a href="https://www.bclplaw.com/en-US/events-insights-news/the-cyber-resilience-act-is-rewriting-the-rules-of-digital-products-safety.html">Bryan Cave Leighton Paisner &#8212; The Cyber Resilience Act Is Rewriting the Rules of Digital Products Safety (November 2025)</a></p><p>[8] <a href="https://www.scanoss.com/post/cyber-resilience-act-requirements-cra">SCANOSS &#8212; Cyber Resilience Act (CRA) Requirements Explained (March 2026)</a></p><p>[9] <a href="https://www.txone.com/blog/cra-guide-for-manufacturers/">TXOne Networks &#8212; The 
Cyber Resilience Act: A Guide for Manufacturers (April 2025)</a></p><p>[10] <a href="https://nflo.tech/knowledge-base/ict-supply-chain-security-vendor-audit-nis2/">nflo.tech &#8212; ICT Supply Chain Security: Vendor Audit and NIS2 Compliance (January 2026)</a></p><p>[11] <a href="https://www.hoganlovells.com/en/publications/eu-cyber-resilience-act-getting-ready-for-cra-compliance-in-2026">Hogan Lovells &#8212; EU Cyber Resilience Act: Key 2026 Milestones Toward CRA Compliance</a></p>]]></content:encoded></item><item><title><![CDATA[The Rules Changed Before You Finished the First Draft]]></title><description><![CDATA[The European Commission&#8217;s January 2026 cybersecurity package came with a reassuring headline: simplification. Clearer scope. Reduced compliance costs. Harmonised controls. For organisations still mid-implementation on NIS2 &#8212; and for boards who only recently acknowledged that cybersecurity is their problem &#8212; that framing is dangerously comfortable.]]></description><link>https://www.varthalitis.eu/p/the-rules-changed-before-you-finished-the-first-draft-2</link><guid isPermaLink="false">https://www.varthalitis.eu/p/the-rules-changed-before-you-finished-the-first-draft-2</guid><dc:creator><![CDATA[Angelos Varthalitis]]></dc:creator><pubDate>Thu, 05 Mar 2026 16:31:26 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/229e5217-12da-473f-bf4e-63759ba26697_1200x630.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iDNj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312c7437-88f8-4832-bbc3-a73b1975d567_1200x630.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!iDNj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312c7437-88f8-4832-bbc3-a73b1975d567_1200x630.webp 424w, https://substackcdn.com/image/fetch/$s_!iDNj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312c7437-88f8-4832-bbc3-a73b1975d567_1200x630.webp 848w, https://substackcdn.com/image/fetch/$s_!iDNj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312c7437-88f8-4832-bbc3-a73b1975d567_1200x630.webp 1272w, https://substackcdn.com/image/fetch/$s_!iDNj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312c7437-88f8-4832-bbc3-a73b1975d567_1200x630.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iDNj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312c7437-88f8-4832-bbc3-a73b1975d567_1200x630.webp" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/312c7437-88f8-4832-bbc3-a73b1975d567_1200x630.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;The Rules Changed Before You Finished the First Draft&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="The Rules Changed Before You Finished the First Draft" title="The Rules Changed Before You Finished the First Draft" 
srcset="https://substackcdn.com/image/fetch/$s_!iDNj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312c7437-88f8-4832-bbc3-a73b1975d567_1200x630.webp 424w, https://substackcdn.com/image/fetch/$s_!iDNj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312c7437-88f8-4832-bbc3-a73b1975d567_1200x630.webp 848w, https://substackcdn.com/image/fetch/$s_!iDNj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312c7437-88f8-4832-bbc3-a73b1975d567_1200x630.webp 1272w, https://substackcdn.com/image/fetch/$s_!iDNj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312c7437-88f8-4832-bbc3-a73b1975d567_1200x630.webp 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><p>The European Commission&#8217;s January 2026 cybersecurity package came with a reassuring headline:&nbsp;<em>simplification</em>. Clearer scope. Reduced compliance costs. Harmonised controls. For organisations still mid-implementation on NIS2 &#8212; and for boards who only recently acknowledged that cybersecurity is their problem &#8212; that framing is dangerously comfortable.</p><p>The reality is more operationally complex. The Commission has proposed substantive changes to a directive that many Member States only recently transposed, and which some are still in the process of implementing. 
For CISOs, the question is not whether the amendments are welcome &#8212; many of them are &#8212; but what to do when the rules shift before the ink on your current compliance programme has dried.</p><p>This article unpacks what actually changed, what the governance implications are, and what CISOs should do right now.</p><h2>What the Commission Actually Proposed</h2><p>On 20 January 2026, the European Commission published a targeted proposal to amend the NIS2 Directive (Directive (EU) 2022/2555), as part of a broader package that also introduced a revised Cybersecurity Act (CSA2). The amendments build on the earlier Digital Omnibus Package from November 2025, which introduced the first wave of streamlining measures &#8212; including a single incident reporting entry point covering NIS2, GDPR, DORA, eIDAS, and the CER Directive.<sup>[1]</sup></p><p>The January 2026 proposal goes further. Its stated goals: clarify scope, harmonise technical measures, introduce certification-based compliance pathways, and strengthen cross-border supervision through an expanded ENISA mandate.<sup>[2]</sup></p><p>These are genuine improvements. But the timeline matters enormously.</p><h2>The Compliance Paradox</h2><p>The most operationally significant fact about the January 2026 amendments is buried in the fine print: organisations should not expect to benefit from them anytime soon.</p><p>The proposal must first complete trilogue negotiations between the European Parliament and the Council, with political agreement targeted for early 2027. After adoption, Member States will have a 12-month transposition period. Only then will the Commission begin issuing detailed technical requirements at Article 21 level.<sup>[3]</sup></p><p>The practical implication: the harmonised, simplified NIS2 that the Commission is describing will not be operationally real for most organisations until 2028 or later. In the meantime, current national rules apply in full. 
The compliance programmes CISOs are running today remain the operative framework.</p><p>This creates a governance dilemma that boards are poorly equipped to navigate. The moment they hear &#8220;simplification is coming,&#8221; the instinct is to defer investment and wait. That is precisely the wrong response &#8212; and part of the CISO&#8217;s job right now is to pre-empt it.</p><h2>What Actually Changes: The Five Threads</h2><h3>1. Gold-Plating Gets a Ceiling</h3><p>One of the more consequential changes targets the fragmentation created by Member State gold-plating. NIS2 is a minimum-harmonisation directive, meaning Member States can impose additional obligations beyond the baseline. Belgium, for example, introduced a mandatory coordinated vulnerability disclosure policy on top of Article 21 requirements.<sup>[4]</sup></p><p>The January 2026 proposal changes this in a specific and important way. Where the Commission adopts implementing acts specifying technical, methodological, or sectoral risk-management measures under Article 21(5), Member States will no longer be permitted to impose further national requirements for those measures. This effectively shifts the definition of core cybersecurity controls to the EU level.<sup>[5]</sup></p><p>For CISOs managing compliance programmes across multiple EU jurisdictions, this is significant. It means that once Article 21 implementing acts are in place, a centralised control framework can be applied without customisation to each Member State&#8217;s additional preferences. The compliance overhead of running parallel documentation per jurisdiction &#8212; currently estimated at &#8364;150,000&#8211;&#8364;300,000 annually for a mid-sized organisation operating across five Member States &#8212; has a defined end state.<sup>[6]</sup></p><p>The caveat: this ceiling only applies&nbsp;<em>where</em>&nbsp;the Commission has issued implementing acts. 
It has not yet done so under existing NIS2 authority. The gold-plating problem persists until the new framework is fully operational.</p><h3>2. Certification as Compliance Proof</h3><p>The most board-friendly development in the package is the introduction of certification as a pathway to demonstrating NIS2 compliance &#8212; and the explicit limitation it places on supervisory authorities.</p><p>Under the proposal, organisations will be able to rely on European cybersecurity certification schemes, including future entity-level cyber-posture certifications, to demonstrate compliance with NIS2 risk-management obligations. Critically, where certification demonstrates compliance, competent authorities will not be permitted to subject the entity to security audits.<sup>[7]</sup></p><p>This is a strategic shift. Certification moves from being a voluntary quality label to being a core compliance and risk management instrument. For multinationals, it creates the possibility of a portable, EU-recognised evidence pack that reduces duplicative supervisory demands across jurisdictions.<sup>[8]</sup></p><p>The governance implication is clear: investment in certification schemes is no longer purely defensive or reputational. It carries a direct regulatory return &#8212; audit exemption. That is a board-level argument for security investment that CISOs have rarely had available to them.</p><h3>3. The Supply Chain Questionnaire Problem Gets Acknowledged</h3><p>The Commission explicitly recognises in the proposal that NIS2 supply-chain obligations have generated burdensome and inconsistent supplier questionnaires, often cascading obligations down the supply chain.<sup>[9]</sup></p><p>This matters not just as a compliance signal but as a governance validation. 
The &#8220;questionnaire flood&#8221; is a real operational problem &#8212; organisations receiving dozens of overlapping, inconsistently scoped security assessments from customers and partners, each calibrated to a different national interpretation of Article 21. The Commission&#8217;s acknowledgement creates the foundation for a more standardised, certification-based approach to supply chain assurance.</p><p>For CISOs managing vendor risk programmes, the direction of travel is toward EU-level certification schemes replacing ad hoc questionnaire-based assurance. The practical implication is that vendor risk governance frameworks should be designed now to accommodate certification-based evidence rather than being locked into proprietary questionnaire formats.</p><h3>4. Ransomware Reporting Gets Granular &#8212; and Legally Exposed</h3><p>The proposal introduces a significant new disclosure requirement for ransomware incidents. Under the amended framework, incident reports for ransomware attacks must include whether the attack was detected, the attack vector, and whether mitigation measures were implemented.<sup>[10]</sup></p><p>More significantly, national authorities will be empowered to request additional information, including whether a ransom demand was made, whether a ransom was paid, the amount paid, the payment method, the recipient, and details of any crypto-asset service providers involved.<sup>[11]</sup></p><p>This is not a simplification measure. It is a substantive expansion of incident disclosure obligations with direct legal and commercial implications. Organisations that pay ransoms &#8212; and the data suggests a significant proportion still do &#8212; will now face disclosure requirements that carry potential regulatory, contractual, and reputational consequences.</p><p>CISOs should be updating their incident response playbooks and legal privilege frameworks now. 
The decision to pay or not pay a ransom has always carried legal risk; under the proposed amendments, it carries disclosure risk regardless of the decision made.</p><h3>5. New Entities Enter Scope</h3><p>The proposal expands the scope of NIS2 to include providers of European Digital Identity Wallets, providers of European Business Wallets, and operators of submarine data transmission infrastructure.<sup>[12]</sup>&nbsp;Providers of Digital Identity and Business Wallets would be classified as essential entities irrespective of their size.</p><p>Additionally, a new &#8220;small mid-cap&#8221; enterprise category is introduced: entities employing fewer than 750 persons with annual turnover of &#8364;150 million or less, or a balance sheet total of &#8364;129 million or less. These entities would, as a general rule, be classified as important rather than essential &#8212; reducing supervisory intensity for approximately 22,500 companies.<sup>[13]</sup></p><p>For CISOs at organisations currently classified as essential, this reclassification at the margins does not change obligations. But it does affect supply chain exposure: important entities face lower supervisory pressure, which may affect the assurance they provide to essential entity partners.</p><h2>What Has Not Changed</h2><p>It is worth being explicit about what the January 2026 amendments do not affect.</p><p><strong>Article 21 obligations remain in force.</strong>&nbsp;The risk management measures &#8212; covering incident handling, business continuity, supply chain security, encryption, access control, and multi-factor authentication &#8212; apply now under current national transpositions. The amendments do not suspend or defer them.</p><p><strong>Board accountability is unchanged.</strong>&nbsp;NIS2&#8217;s requirement that management bodies approve and oversee cybersecurity risk management measures, and the provision for personal liability in cases of serious negligence, remain fully operative. 
The amendments do not reduce management exposure.</p><p><strong>Enforcement timelines are unchanged.</strong>&nbsp;The 24-hour early warning, 72-hour detailed notification, and one-month final report structure remains in place. The Digital Omnibus single entry point, when operational, will streamline&nbsp;<em>how</em>&nbsp;reports are filed &#8212; not&nbsp;<em>when</em>.</p><h2>What CISOs Should Do Right Now</h2><p>The January 2026 amendments create a specific governance challenge: how to run a compliance programme in a framework that is formally mid-revision, while enforcement under the current framework is active and escalating.</p><p>The answer is not to pause. It is to build compliance architecture that is adaptive rather than point-in-time.</p><p><strong>Proceed with current Article 21 compliance programmes in full.</strong>&nbsp;The amendments do not provide grounds for deferral. Enforcement is active across transposed jurisdictions and supervisory activity is accelerating through 2026.<sup>[14]</sup></p><p><strong>Design for certification from the start.</strong>&nbsp;Where possible, align current control frameworks and evidence packs with the EU cybersecurity certification schemes that will eventually serve as compliance proof. The investment made now in certification-compatible documentation is not wasted &#8212; it is the foundation of the future compliance model.</p><p><strong>Update ransomware response playbooks immediately.</strong>&nbsp;The expanded disclosure requirements for ransomware incidents represent a near-term operational and legal risk. Incident response plans, legal privilege frameworks, and ransom decision authorities should all be reviewed before the next incident, not after.</p><p><strong>Prepare the board for the timeline reality.</strong>&nbsp;The narrative that &#8220;NIS2 is being simplified&#8221; will reach boardrooms before the nuance does. 
CISOs should proactively brief boards on what the amendments mean in practice: the benefits are real but years away; current obligations apply in full; and the compliance investments being made now are not being made redundant.</p><p><strong>Reassess supply chain governance.</strong>&nbsp;The Commission&#8217;s acknowledgement of the questionnaire problem signals a shift toward certification-based supply chain assurance. Vendor risk frameworks should be redesigned now to accommodate that model.</p><h2>The Governance Test</h2><p>The January 2026 NIS2 amendments are, in a technical sense, a simplification exercise. In a governance sense, they are something more demanding: a test of whether organisations have built compliance programmes with the maturity to absorb regulatory evolution without losing momentum.</p><p>The organisations that will struggle are those that treated NIS2 as a point-in-time certification exercise &#8212; a project with a start date and an end date. The organisations that will benefit from the amendments, when they arrive, are those that have built compliance as an operational capability: adaptive, evidence-based, and embedded in governance rather than delegated to a project team.</p><p>For CISOs, the message is unchanged from what it has always been. The threat landscape does not wait for regulatory certainty. 
Neither should you.</p><h2>References</h2><p>[1]&nbsp;<a href="https://www.twobirds.com/en/insights/2025/european-cybersecurity-regulatory-update-nis2-and-beyond?ref=varthalitis.ghost.io">Bird &amp; Bird &#8212; European Cybersecurity Regulatory Update NIS2 and Beyond</a>&nbsp;(accessed March 2026)</p><p>[2]&nbsp;<a href="https://www.insideprivacy.com/data-security/cybersecurity/european-commission-proposes-targeted-amendments-to-nis2-to-simplify-compliance-and-align-with-proposed-cybersecurity-act-2/?ref=varthalitis.ghost.io">Inside Privacy &#8212; European Commission Proposes Targeted Amendments to NIS2</a>, 23 January 2026</p><p>[3]&nbsp;<a href="https://privacymatters.dlapiper.com/2026/02/eu-nis2-update-eu-moves-to-harmonise-cyber-controls-refine-scope-and-add-new-in-scope-entities/?ref=varthalitis.ghost.io">DLA Piper &#8212; NIS2 Update: EU Moves to Harmonise Cyber Controls</a>, February 2026</p><p>[4]&nbsp;<a href="https://www.insideprivacy.com/data-security/cybersecurity/european-commission-proposes-targeted-amendments-to-nis2-to-simplify-compliance-and-align-with-proposed-cybersecurity-act-2/?ref=varthalitis.ghost.io">Inside Privacy &#8212; European Commission Proposes Targeted Amendments to NIS2</a>, 23 January 2026</p><p>[5]&nbsp;<a href="https://www.mcdermottlaw.com/insights/new-eu-cybersecurity-package-what-the-proposed-reforms-mean-for-companies-in-the-eu/?ref=varthalitis.ghost.io">McDermott Will &amp; Emery &#8212; New EU Cybersecurity Package</a>, February 2026</p><p>[6]&nbsp;<a href="https://maiky.io/blog/nis-2-amendments-what-smes-need-to-know?ref=varthalitis.ghost.io">Maiky.io &#8212; NIS2 Amendments: What SMEs Need to Know</a>, February 2026</p><p>[7]&nbsp;<a href="https://www.mcdermottlaw.com/insights/new-eu-cybersecurity-package-what-the-proposed-reforms-mean-for-companies-in-the-eu/?ref=varthalitis.ghost.io">McDermott Will &amp; Emery &#8212; New EU Cybersecurity Package</a>, February 2026</p><p>[8]&nbsp;<a 
href="https://iapp.org/news/a/eu-cybersecurity-reboot-practical-impacts-of-the-proposed-nis2-and-csa2-reforms?ref=varthalitis.ghost.io">IAPP &#8212; EU Cybersecurity Reboot: Practical Impacts of the Proposed NIS2 and CSA2 Reforms</a>, 2026</p><p>[9]&nbsp;<a href="https://www.mcdermottlaw.com/insights/new-eu-cybersecurity-package-what-the-proposed-reforms-mean-for-companies-in-the-eu/?ref=varthalitis.ghost.io">McDermott Will &amp; Emery &#8212; New EU Cybersecurity Package</a>, February 2026</p><p>[10]&nbsp;<a href="https://www.insideprivacy.com/data-security/cybersecurity/european-commission-proposes-targeted-amendments-to-nis2-to-simplify-compliance-and-align-with-proposed-cybersecurity-act-2/?ref=varthalitis.ghost.io">Inside Privacy &#8212; European Commission Proposes Targeted Amendments to NIS2</a>, 23 January 2026</p><p>[11]&nbsp;<a href="https://www.insideprivacy.com/data-security/cybersecurity/european-commission-proposes-targeted-amendments-to-nis2-to-simplify-compliance-and-align-with-proposed-cybersecurity-act-2/?ref=varthalitis.ghost.io">Ibid.</a></p><p>[12]&nbsp;<a href="https://technologyquotient.freshfields.com/post/102me35/eu-cybersecurity-package-commission-proposes-targeted-nis2-amendments-and-a-revi?ref=varthalitis.ghost.io">Freshfields &#8212; EU Cybersecurity Package: Commission Proposes Targeted NIS2 Amendments</a>, January 2026</p><p>[13]&nbsp;<a href="https://technologyquotient.freshfields.com/post/102me35/eu-cybersecurity-package-commission-proposes-targeted-nis2-amendments-and-a-revi?ref=varthalitis.ghost.io">Ibid.</a></p><p>[14]&nbsp;<a href="https://www.insideprivacy.com/european-union-2/what-to-watch-in-2026-key-eu-privacy-cybersecurity-developments/?ref=varthalitis.ghost.io">Inside Privacy &#8212; What to Watch in 2026: Key EU Privacy &amp; Cybersecurity Developments</a>, 28 January 2026</p>]]></content:encoded></item><item><title><![CDATA[AI Is Not Just Innovation. 
It Is Now Part of Your Threat Model.]]></title><description><![CDATA[Over the last year, I&#8217;ve sat in more AI strategy conversations than I can count.The energy in those rooms is familiar.]]></description><link>https://www.varthalitis.eu/p/ai-is-not-just-innovation-it-is-now-part-of-your-threat-model</link><guid isPermaLink="false">https://www.varthalitis.eu/p/ai-is-not-just-innovation-it-is-now-part-of-your-threat-model</guid><dc:creator><![CDATA[Angelos Varthalitis]]></dc:creator><pubDate>Tue, 03 Mar 2026 05:29:44 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4fc1503a-76c7-464e-8dca-546df7fdc019_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aSPC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe445210d-1687-4d82-bbd8-2853a1635f66_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aSPC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe445210d-1687-4d82-bbd8-2853a1635f66_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!aSPC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe445210d-1687-4d82-bbd8-2853a1635f66_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!aSPC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe445210d-1687-4d82-bbd8-2853a1635f66_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!aSPC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe445210d-1687-4d82-bbd8-2853a1635f66_1536x1024.png 
1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aSPC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe445210d-1687-4d82-bbd8-2853a1635f66_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e445210d-1687-4d82-bbd8-2853a1635f66_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;AI Is Not Just Innovation. It Is Now Part of Your Threat Model.&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="AI Is Not Just Innovation. It Is Now Part of Your Threat Model." title="AI Is Not Just Innovation. It Is Now Part of Your Threat Model." 
srcset="https://substackcdn.com/image/fetch/$s_!aSPC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe445210d-1687-4d82-bbd8-2853a1635f66_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!aSPC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe445210d-1687-4d82-bbd8-2853a1635f66_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!aSPC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe445210d-1687-4d82-bbd8-2853a1635f66_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!aSPC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe445210d-1687-4d82-bbd8-2853a1635f66_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><p>Over the last year, I&#8217;ve sat in more AI strategy conversations than I can count. The energy in those rooms is familiar. Optimism. Curiosity. A little urgency. Sometimes competitive anxiety. AI is framed as acceleration, efficiency, advantage. What I hear less often is a sober discussion about how AI changes our exposure. The latest 2026 X-Force Threat Intelligence Index, published by IBM X-Force, provides data that should rebalance that conversation. Not because it is dramatic. Because it is directional.</p><p>The report shows a 44% year-over-year increase in exploitation of public-facing applications. Forty percent of incidents began with vulnerability exploitation rather than phishing. Ransomware and extortion groups grew by 49%. More than 300,000 AI and chatbot credentials were observed for sale in underground markets. Those numbers are not random spikes. They reflect a shift in tempo. Attackers are becoming faster. More automated. Less dependent on deep expertise. 
AI reduces the time needed for reconnaissance. It assists with scripting. It helps iterate payloads. It personalizes communication at scale. The barrier to entry is lowering. And when barriers lower, volume rises.</p><div><hr></div><p><strong>The Compression of Time</strong></p><p>What concerns me most is not the sophistication of AI itself. It is the compression it introduces. In cybersecurity, time has always been a silent variable. Time to detect. Time to patch. Time to respond. Time to recover. If reconnaissance once took days and now takes hours, if exploit development can be accelerated with model assistance, if campaigns can be refined dynamically, then the defender&#8217;s margin for error shrinks.</p><p>The 40% statistic around vulnerability exploitation is particularly telling. For years, phishing dominated board-level discussions. Now exploitation of exposed systems is leading initial access. This does not mean phishing has disappeared. It means automation has matured. When attackers can scan, analyze, and prioritize weaknesses at scale, exposure becomes more visible to them than it is to us.</p><p><em>That is not a technology problem alone. It is a governance problem.</em></p><div><hr></div><p><strong>Identity Is Quietly Becoming the Center of Gravity</strong></p><p>The discovery of more than 300,000 AI-related credentials for sale should not be dismissed as an isolated statistic. AI platforms increasingly sit inside workflows. They connect to APIs. They process internal context. They often inherit permissions that were granted for convenience rather than scrutiny. Compromise one of those accounts, and you may not just gain access to a tool. You gain insight into how an organization thinks, drafts, analyzes, and sometimes even decides. Identity has been called the new perimeter for years. With AI, that statement becomes literal. 
If we are not monitoring AI platform access with the same rigor as privileged accounts, we are behind.</p><div><hr></div><p><strong>The Governance Gap</strong></p><p>In executive settings, I often hear thoughtful discussions about how AI can improve productivity and reduce cost. These are valid conversations. But they must be matched with equally disciplined questions:</p><p>How does AI change our threat model?</p><p>Where does AI sit in our risk register?</p><p>What new attack paths does it introduce?</p><p>Are we adjusting our incident response assumptions?</p><p><strong>Attackers do not wait for policy cycles. They experiment in real time.</strong></p><p>Enterprises, understandably, operate within structure and oversight. That structure is necessary. But it also means that we must be deliberate about anticipating risk, not reacting to headlines. AI does not remove fundamentals. It intensifies them. If 40% of attacks begin with vulnerability exploitation, patch discipline and exposure management remain decisive. If ransomware groups are growing by nearly 50%, resilience planning cannot be symbolic.</p><p><em>Sophisticated technology does not compensate for neglected basics.</em></p><div><hr></div><p><strong>Leadership in a Compressed Environment</strong></p><p>AI is leverage. It increases capability on both sides. The difference is constraint. Adversaries innovate without governance. Enterprises innovate within it. That constraint is not a weakness. It is the price of operating responsibly. But responsible leadership requires clarity. AI is no longer just an innovation topic. It is an enterprise risk variable. It changes the speed of attack, the value of identity, and the scale at which exploitation can occur. The question for 2026 is not whether your organization is experimenting with AI. 
It is whether your threat model reflects the fact that your adversaries already are.</p><p><strong>If that discussion is not happening at the highest level, it should be.</strong></p>]]></content:encoded></item><item><title><![CDATA[When Principles Meet Contracts: What Anthropic Got Right — and Dangerously Wrong]]></title><description><![CDATA[The confrontation between Anthropic and the Pentagon that culminated on February 27, 2026 is already being framed as a battle between AI ethics and government overreach.]]></description><link>https://www.varthalitis.eu/p/when-principles-meet-contracts-what-anthropic-got-right-and-dangerously-wrong</link><guid isPermaLink="false">https://www.varthalitis.eu/p/when-principles-meet-contracts-what-anthropic-got-right-and-dangerously-wrong</guid><dc:creator><![CDATA[Angelos Varthalitis]]></dc:creator><pubDate>Sat, 28 Feb 2026 19:49:10 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9b816ef5-406c-40a6-b52d-09f54ad8eff4_1600x900.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!p-xA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfe221e2-b572-4c87-b1b6-bc36bee1aecb_1600x900.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!p-xA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfe221e2-b572-4c87-b1b6-bc36bee1aecb_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!p-xA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfe221e2-b572-4c87-b1b6-bc36bee1aecb_1600x900.png 848w, 
https://substackcdn.com/image/fetch/$s_!p-xA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfe221e2-b572-4c87-b1b6-bc36bee1aecb_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!p-xA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfe221e2-b572-4c87-b1b6-bc36bee1aecb_1600x900.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!p-xA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfe221e2-b572-4c87-b1b6-bc36bee1aecb_1600x900.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bfe221e2-b572-4c87-b1b6-bc36bee1aecb_1600x900.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;When Principles Meet Contracts: What Anthropic Got Right &#8212; and Dangerously Wrong&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="When Principles Meet Contracts: What Anthropic Got Right &#8212; and Dangerously Wrong" title="When Principles Meet Contracts: What Anthropic Got Right &#8212; and Dangerously Wrong" srcset="https://substackcdn.com/image/fetch/$s_!p-xA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfe221e2-b572-4c87-b1b6-bc36bee1aecb_1600x900.png 424w, 
https://substackcdn.com/image/fetch/$s_!p-xA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfe221e2-b572-4c87-b1b6-bc36bee1aecb_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!p-xA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfe221e2-b572-4c87-b1b6-bc36bee1aecb_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!p-xA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbfe221e2-b572-4c87-b1b6-bc36bee1aecb_1600x900.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><p>The confrontation between Anthropic and the Pentagon that culminated on February 27, 2026 is already being framed as a battle between AI ethics and government overreach. That framing is partially correct. But as someone who has spent over two decades managing enterprise security, vendor relationships, and regulatory compliance, I see something else entirely: a case study in vendor governance failure with consequences that will reshape how AI companies engage with government clients.<br>Let me be clear upfront. My position is not binary. On surveillance, Anthropic is right. On autonomous weapons, the Pentagon is right. And on vendor governance, Anthropic made a serious, avoidable mistake.</p><p><strong>Where Anthropic Is Right: The Surveillance Line</strong><br>Mass surveillance of a country&#8217;s own citizens is not a gray area. It is a documented, historical pattern of institutional abuse. 
From COINTELPRO to PRISM, from the Church Committee findings to the Snowden revelations, the evidence is consistent: when governments acquire surveillance infrastructure, the scope expands, the oversight erodes, and citizens pay the price.<br><strong>COINTELPRO</strong> (1956&#8211;1971) was a covert FBI program that surveilled, infiltrated, discredited, and disrupted American political organizations &#8212; targeting civil rights leaders, anti-Vietnam War groups, feminist organizations, and the Black Panther Party. It operated through illegal wiretapping, forged documents, anonymous harassment campaigns, and physical infiltration of lawful political groups. Its existence was unknown to the public until March 1971, when activists broke into an FBI field office in Media, Pennsylvania and leaked classified documents to the press. [1] The U.S. Senate later concluded that COINTELPRO &#8220;was a sophisticated vigilante program aimed squarely at preventing the exercise of First Amendment rights of speech and association.&#8221; [2]<br><strong>The Church Committee</strong> (1975&#8211;1976) was a U.S. Senate select committee formed specifically in response to COINTELPRO and emerging reports of intelligence agency abuse. Chaired by Senator Frank Church, it investigated the CIA, NSA, FBI, and IRS and uncovered a catalog of previously unknown programs: Operation SHAMROCK (the NSA intercepting all telegrams entering and leaving the U.S. since 1945), Project MINARET (warrantless surveillance of American citizens including Martin Luther King Jr., Jane Fonda, and even Senator Church himself), CIA assassination plots against foreign leaders, and mass mail-opening operations. 
[3] Senator Walter Mondale publicly warned during the hearings that the NSA&#8217;s capabilities &#8220;could be used by President &#8216;A&#8217; in the future to spy upon the American people, to chill and interrupt political dissent.&#8221; [4] The committee&#8217;s findings directly led to the creation of the Foreign Intelligence Surveillance Act (FISA) in 1978.<br><strong>PRISM</strong> (revealed 2013) was an NSA classified program that collected stored internet communications &#8212; including emails, photos, documents, audio, and video &#8212; directly from the servers of nine major technology companies including Microsoft, Google, Apple, Facebook, Yahoo, Skype, and YouTube. It operated under Section 702 of the FISA Amendments Act and was revealed to account for approximately 91% of the roughly 250 million internet communications the NSA collected each year. [5] The program&#8217;s existence was completely unknown to the public until it was disclosed by Edward Snowden.<br><strong>The Snowden Revelations</strong> (2013) documented the full scope of NSA surveillance through thousands of leaked classified documents provided to journalists Glenn Greenwald and Laura Poitras. Beyond PRISM, Snowden exposed XKeyscore &#8212; an analytical tool that allowed NSA analysts to search through vast databases of emails, online chats, and browsing histories of millions of individuals without prior authorization. [6] He also revealed bulk telephone metadata collection requiring Verizon to hand over records of millions of Americans&#8217; calls daily, global financial data tracking, and extensive surveillance of allied foreign governments. In September 2020, a U.S. federal court ruled that the NSA&#8217;s bulk phone metadata collection program exposed by Snowden was illegal and likely unconstitutional. 
[7]<br>The pattern across all four cases is identical: programs launched with narrow stated purposes, expanded far beyond original scope, operated with minimal or no oversight, and only exposed through leaks or external investigation &#8212; never through internal accountability.<br>Anthropic&#8217;s refusal to allow Claude to be used as an engine for domestic mass surveillance is not ideological posturing. It is a reasonable safeguard grounded in this evidence. An AI model capable of processing vast amounts of unstructured data, integrated into classified government networks, used to monitor citizens at scale &#8212; that is a fundamentally different proposition than any previous surveillance tool. The potential for abuse is not hypothetical. It is historically documented.<br>On this point, I support Anthropic&#8217;s position without reservation. Certain lines should not be crossed regardless of contractual pressure, and the mass surveillance of citizens is one of them.</p><p><strong>Where the Pentagon Is Right: The Weapons Autonomy Argument</strong><br>The question of autonomous weapons is more nuanced, and here my view diverges from Anthropic&#8217;s.<br>When a sovereign military procures a technology for lawful defense purposes, it operates within an existing legal framework &#8212; international humanitarian law, rules of engagement, civilian oversight, command accountability. These are not perfect systems, but they are established governance structures with centuries of development behind them.<br>A private AI vendor inserting its own restrictions on weapons targeting decisions is a qualitatively different kind of intervention. It assumes that Anthropic&#8217;s judgment on acceptable military use supersedes the legal and operational frameworks that already govern these decisions. 
That is a significant overreach for a commercial technology provider.<br>If today&#8217;s AI models are not reliable enough for autonomous targeting &#8212; and there are serious technical arguments that they are not &#8212; that concern should be addressed through contractual reliability standards, testing requirements, and operational guardrails <strong>negotiated upfront</strong>. Not through unilateral vendor restrictions imposed after deployment.<br>The Pentagon&#8217;s position that it must retain operational flexibility for all lawful use cases is not unreasonable. That is how defense procurement works.</p><p><strong>Where Anthropic Failed: The Vendor Governance Mistake</strong><br>This is where I need to be direct, because the industry commentary has largely missed this point.<br>Anthropic accepted a contract with the Department of Defense. They operated within classified environments. They became embedded in mission-critical systems &#8212; reportedly including the operation to capture Nicol&#225;s Maduro. [8] And then, after achieving that position of operational dependency, they attempted to impose restrictions that should have been negotiated before the relationship began.<br>In any mature vendor management framework &#8212; ISO 27001, NIST SP 800-161, or basic procurement governance &#8212; this behavior would trigger an immediate red flag. It is not how responsible vendors operate.<br>Think of the analogy in enterprise security: imagine a SIEM vendor, after deployment in your SOC, informing you that they have reviewed your use cases and are uncomfortable with certain threat hunting activities. You would not accept it. You would escalate it. You would terminate the contract and flag that vendor in every future assessment.<br>The DoD&#8217;s reaction, while politically charged in its execution, follows exactly this logic. 
A critical vendor changing terms on embedded, operational infrastructure is a supply chain risk by definition.<br>If Anthropic had genuine ethical objections to certain military use cases &#8212; and some of those objections are legitimate &#8212; <strong>the time to raise them was before signing</strong>. The options were straightforward: negotiate explicit use restrictions in the contract upfront, decline the engagement entirely, or accept the operational realities of defense contracting. What they could not do, with any integrity, was take the contract, become critical infrastructure, and then attempt to impose conditions mid-relationship.<br>Dario Amodei is a brilliant researcher and a principled leader. But this decision reflects a gap between AI safety expertise and enterprise vendor governance experience. Those are different disciplines, and conflating them created this crisis.</p><p><strong>What This Means for the Industry</strong><br>This situation will not be the last of its kind. As AI models become embedded in government systems, healthcare infrastructure, financial services, and critical national systems, the tension between vendor ethics and client operational requirements will intensify.<br>The lesson for AI companies is not to abandon principles. It is to operationalize them earlier in the engagement cycle. Ethical red lines belong in contract negotiations, not in operational disputes after deployment.<br>The lesson for organizations procuring AI is equally clear. Vendor AI governance is now a distinct risk category that your existing frameworks may not adequately address. You need to assess not just what an AI vendor&#8217;s model can do, but what conditions the vendor will unilaterally impose on how you use it &#8212; and when.<br>The lesson for policymakers and regulators is that the gap between AI safety principles and procurement law is real and consequential, and it urgently needs to be addressed. 
We need frameworks that allow AI companies to maintain ethical boundaries without creating operational crises in the institutions that depend on their technology.</p><p><strong>Conclusion</strong><br>Anthropic drew two lines. One of them &#8212; surveillance &#8212; was the right line to draw. The other &#8212; weapons autonomy &#8212; overstepped what a commercial vendor should be dictating to a sovereign military operating within legal frameworks.<br>But the deeper failure was not about where the lines were drawn. It was about when. Principles have no credibility if they only appear after the contract is signed, the systems are deployed, and the dependency is established.<br>The AI industry is navigating genuinely new territory. The companies that will earn lasting trust &#8212; from governments, enterprises, and the public &#8212; are those that build governance into the relationship from the first conversation, not those that impose it at the moment of maximum leverage.<br>That is not ethics. That is vendor risk management. And right now, the industry needs both.</p><p><strong>References</strong><br>[1] Federal Bureau of Investigation. COINTELPRO. FBI Records: The Vault. <a href="https://vault.fbi.gov/cointel-pro?ref=varthalitis.ghost.io">https://vault.fbi.gov/cointel-pro</a><br>[2] United States Senate Select Committee on Intelligence. Senate Select Committee to Study Governmental Operations with Respect to Intelligence Activities (Church Committee). U.S. Senate Historical Office. <a href="https://www.senate.gov/about/powers-procedures/investigations/church-committee.htm?ref=varthalitis.ghost.io">https://www.senate.gov/about/powers-procedures/investigations/church-committee.htm</a><br>[3] Church, F. (1975). Curtailment of the National Security State: The Church Senate Committee of 1975&#8211;1976. Frank Church Institute, Boise State University. 
<a href="https://www.boisestate.edu/sps-frankchurchinstitute?ref=varthalitis.ghost.io">https://www.boisestate.edu/sps-frankchurchinstitute</a><br>[4] PBS Frontline. Pre-Emption: The Church Committee Hearings &amp; The FISA Court. <a href="https://www.pbs.org/wgbh/pages/frontline/homefront/preemption/churchfisa.html?ref=varthalitis.ghost.io">https://www.pbs.org/wgbh/pages/frontline/homefront/preemption/churchfisa.html</a><br>[5] Electronic Privacy Information Center. EPIC v. DOJ &#8211; PRISM. <a href="https://epic.org/documents/epic-v-doj-prism/?ref=varthalitis.ghost.io">https://epic.org/documents/epic-v-doj-prism/</a><br>[6] MacAskill, E. &amp; Greenwald, G. (2013, July 31). XKeyscore: NSA tool collects &#8216;nearly everything a user does on the internet&#8217;. The Guardian.<br>[7] Wikipedia. Snowden effect. <a href="https://en.wikipedia.org/wiki/Snowden_effect?ref=varthalitis.ghost.io">https://en.wikipedia.org/wiki/Snowden_effect</a> &#8212; referencing United States Court of Appeals for the Ninth Circuit ruling, September 2020.<br>[8] Axios. Trump moves to blacklist Anthropic&#8217;s Claude from government work (February 27, 2026). 
<a href="https://www.axios.com/2026/02/27/anthropic-pentagon-supply-chain-risk-claude?ref=varthalitis.ghost.io">https://www.axios.com/2026/02/27/anthropic-pentagon-supply-chain-risk-claude</a></p>]]></content:encoded></item><item><title><![CDATA[Securing What You Didn't Build: The CISO's AI Governance Playbook How to take control of AI risk when the deployment already happened without you]]></title><description><![CDATA[The Accountability Gap Nobody Wants to Talk About]]></description><link>https://www.varthalitis.eu/p/securing-what-you-didnt-build-the-cisos-ai-governance-playbook-how-to-take-control-of-ai-risk-when-the-deployment-already-happened-without-you</link><guid isPermaLink="false">https://www.varthalitis.eu/p/securing-what-you-didnt-build-the-cisos-ai-governance-playbook-how-to-take-control-of-ai-risk-when-the-deployment-already-happened-without-you</guid><dc:creator><![CDATA[Angelos Varthalitis]]></dc:creator><pubDate>Thu, 26 Feb 2026 08:55:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cLT9!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6e49acf-9c42-4d01-b9e7-1a13a04d9488_400x400.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>The Accountability Gap Nobody Wants to Talk About</strong></p><p>There is a pattern playing out in organizations across every sector right now. The board approves an AI strategy. The CTO selects a vendor. The business units begin deploying tools. And somewhere along the way &#8212; usually after the fact &#8212; the CISO is handed responsibility for securing it all.</p><p>This is not anecdotal. 
The SANS 2025 AI Survey, one of the most comprehensive annual assessments of AI's impact on security, found that 100% of organizations plan to incorporate generative AI within the next year, yet identified a "concerning lack of security team involvement in governing GenAI" as one of its most significant findings. Only 20% of security professionals report even limited involvement in governing generative AI at the enterprise level [1].</p><p>The result is a structural accountability gap: CISOs are expected to own the risk of systems they had no hand in designing, selecting, or deploying.</p><p>This has always been part of the job &#8212; CISOs have never controlled every technology decision in their organizations. But AI changes the stakes fundamentally. Traditional software executes instructions. Agentic AI makes decisions and takes autonomous actions. It operates continuously, at machine speed, touching sensitive data across systems in ways that are often invisible to the security team.</p><p>As Cybersecurity Insiders' 2025 State of AI Data Security report puts it plainly: "You cannot secure an AI agent you do not identify, and you cannot govern what you cannot see" [2].</p><p>The question for CISOs is not whether to accept this responsibility &#8212; it will be assigned regardless. The question is how to establish enough control to manage it responsibly.</p><div><hr></div><p><strong>Why Traditional Security Frameworks Break Under AI</strong></p><p>Most organizations are making a critical mistake: treating AI security as a simple extension of their existing privacy and data protection frameworks. 
The CSA and Google Cloud 2025 State of AI Security and Governance report &#8212; based on 300 responses from IT and security professionals &#8212; found that organizations are widely "failing to account for new, AI-specific threats," with data exposure (52%) dominating concerns while genuinely AI-native risks like model integrity compromise (12%) and data poisoning (10%) barely register [3].</p><p>This approach fails for three reasons.</p><p>First, AI introduces a new class of actor into your environment. An AI agent is not a user, and it is not traditional software. It is an autonomous, non-human identity with persistent access to data, systems, and actions. Your existing IAM policies were built for humans. They assume deliberate, observable behavior. According to Obsidian Security's 2025 AI Agent Security Landscape report, AI agents routinely hold 10 times more privileges than required, and 90% of agents are currently over-permissioned [4]. Applying human-centric identity models to machine-speed actors creates compounding blind spots.</p><p>Second, the attack surface is fundamentally different. The 2025 Cybersecurity Insiders report found that 76% of security professionals identify autonomous AI agents as the hardest systems to secure, while 57% lack the ability to block risky AI actions in real time [2]. The threat is not only external: a compromised or misdirected agent can operate under valid credentials, execute legitimate-looking workflows, and exfiltrate data before any detection threshold is triggered.</p><p>Third, and most critically, visibility is the core problem &#8212; not the perimeter. 
The same Cybersecurity Insiders report found that nearly half of organizations have no visibility into AI usage, and another third have only minimal insight, leaving most enterprises unsure where AI is operating or what data it touches [2].</p><div><hr></div><p><strong>A Practical Governance Framework for the CISO</strong></p><p>What follows is not a compliance checklist. It is an operational framework for CISOs who need to establish meaningful control over AI risk in environments where deployment has already outpaced governance.</p><p><strong>Step 1: Build Your AI Asset Inventory</strong></p><p>Before anything else, you need to know what AI is running in your environment. This means active discovery &#8212; not relying on what the business reports, because shadow AI is a growing and material problem. According to Acuvity's 2025 State of AI Security report, 49% of organizations expect a shadow AI incident within the next 12 months [5]. Use your existing DLP, CASB, and network monitoring tools to detect API calls to known AI services. Conduct departmental audits. The goal is a living inventory that maps every AI tool, the data it accesses, the actions it can take, and who owns it. This inventory is the foundation of everything that follows.</p><p><strong>Step 2: Establish a Formal AI Procurement Gate</strong></p><p>Every AI tool &#8212; whether enterprise platform or departmental point solution &#8212; must pass through a security review before deployment. The CSA/Google Cloud research found that organizations with comprehensive governance policies are nearly twice as likely to successfully adopt agentic AI (46%) compared to those with only partial policies (25%) [3]. 
The gate should include: data handling and residency assessment, third-party risk evaluation, review of autonomous action scope, and explicit sign-off from security, legal, and data protection before any pilot proceeds.</p><p><strong>Step 3: Apply Zero Trust Principles to AI Agents</strong></p><p>Treat every AI agent as a privileged non-human identity. This means least-privilege access scoped to the minimum data and systems required for each specific function. It means full action logging covering every tool invocation, every data access, every output. It means runtime guardrails that can block unsafe actions before they execute. Microsoft's 2026 Cyber Pulse AI Security Report, drawing on first-party telemetry from tens of thousands of deployed agents, calls this explicitly: organizations must apply Zero Trust principles to AI agents with the same rigor applied to privileged human users, including identity enforcement, scope boundaries, and behavioral monitoring [6].</p><p><strong>Step 4: Establish Continuous AI Monitoring</strong></p><p>Static assessments are insufficient for systems that adapt over time. The 2025 Acuvity State of AI Security report found that 38% of organizations identify the runtime phase as their most vulnerable AI security point &#8212; the moment when AI components interact with real data and users in production [5]. Monitoring must be continuous: prompts and outputs, tool calls, agent-to-agent interactions, data access patterns, and behavioral drift. Build this into SOC workflows now, before volume makes it unmanageable.</p><p><strong>Step 5: Reframe AI Risk for the Boardroom</strong></p><p>The CISO's ability to govern AI depends significantly on how well they communicate AI risk upward. Technical explanations of model vulnerabilities, prompt injection, or data poisoning will not move a board to action. 
What will move them is connecting AI risk to what boards already care about.</p><p>Proofpoint's 2025 Voice of the CISO report &#8212; based on 1,600 CISOs across 16 countries &#8212; found that boards now rank business valuation as their top concern following a cyberattack, up from the bottom of the list in previous years [7]. That is the hook. An ungoverned AI agent accessing sensitive customer data is not a technical incident &#8212; it is a GDPR and NIS2 data breach with notification obligations and financial penalties. An AI system making autonomous decisions in a regulated process is an EU AI Act audit finding waiting to happen. Frame it this way: every AI deployment without governance is an unquantified liability. The cost of governance is known and controllable. The cost of a regulatory action or breach is not.</p><div><hr></div><p><strong>The Bigger Picture</strong></p><p>The data is unambiguous. Boardroom alignment with CISOs has declined from a high of 84% in 2024 to 64% this year [7] &#8212; even as AI adoption accelerates and the CISO's scope of responsibility expands. The CSA/Google Cloud research found that only 26% of organizations have comprehensive AI security governance in place, and that the gap between those organizations and the majority still developing partial policies is widening across every dimension: leadership awareness, workforce readiness, and confidence in AI protection [3].</p><p>The CISO's impossible position &#8212; securing what you didn't build &#8212; is not going to resolve itself. The pace of AI adoption will not slow to accommodate governance maturity. The governance must be built while the deployment is already in motion.</p><p>That is difficult work. But it is the work. 
The CISOs who build it now &#8212; methodically, with business language that boards can act on &#8212; will be the ones who remain credible partners to their organizations rather than perpetual firefighters.</p><p>The goal is not to stop AI. It is to make AI governable. Those are very different ambitions. Only one of them is achievable.</p><div><hr></div><p><strong>Sources</strong></p><p>[1] SANS Institute. <em>SANS 2025 AI Survey: Measuring AI's Impact on Security Three Years Later</em>. Available at: <a href="https://www.elastic.co/pdf/sans-2025-ai-survey-elastic.pdf?ref=varthalitis.ghost.io">https://www.elastic.co/pdf/sans-2025-ai-survey-elastic.pdf</a></p><p>[2] Cybersecurity Insiders / Cyera. <em>2025 State of AI Data Security Report</em>. December 2025. Available at: <a href="https://www.prnewswire.com/news-releases/cybersecurity-insiders-warns-ai-adoption-is-outpacing-governance-in-new-2025-report-302630674.html?ref=varthalitis.ghost.io">https://www.prnewswire.com/news-releases/cybersecurity-insiders-warns-ai-adoption-is-outpacing-governance-in-new-2025-report-302630674.html</a></p><p>[3] Cloud Security Alliance / Google Cloud. <em>State of AI Security and Governance Survey Report</em>. December 2025. Available at: <a href="https://cloudsecurityalliance.org/press-releases/2025/12/18/csa-and-google-cloud-study-finds-governance-maturity-is-strongest-predictor-of-ai-readiness?ref=varthalitis.ghost.io">https://cloudsecurityalliance.org/press-releases/2025/12/18/csa-and-google-cloud-study-finds-governance-maturity-is-strongest-predictor-of-ai-readiness</a></p><p>[4] Obsidian Security. <em>2025 AI Agent Security Landscape: Players, Trends, and Risks</em>. January 2026. Available at: <a href="https://www.obsidiansecurity.com/blog/ai-agent-market-landscape?ref=varthalitis.ghost.io">https://www.obsidiansecurity.com/blog/ai-agent-market-landscape</a></p><p>[5] Acuvity. <em>2025 State of AI Security</em>. 
Available at: <a href="https://acuvity.ai/2025-state-of-ai-security/?ref=varthalitis.ghost.io">https://acuvity.ai/2025-state-of-ai-security/</a></p><p>[6] Microsoft Security. <em>Cyber Pulse: An AI Security Report</em>. 2026. Available at: <a href="https://www.microsoft.com/en-us/security/security-insider/emerging-trends/cyber-pulse-ai-security-report?ref=varthalitis.ghost.io">https://www.microsoft.com/en-us/security/security-insider/emerging-trends/cyber-pulse-ai-security-report</a></p><p>[7] Proofpoint. <em>2025 Voice of the CISO Report</em>. August 2025. Available at: <a href="https://www.proofpoint.com/us/newsroom/press-releases/proofpoint-2025-voice-ciso-report?ref=varthalitis.ghost.io">https://www.proofpoint.com/us/newsroom/press-releases/proofpoint-2025-voice-ciso-report</a></p>]]></content:encoded></item><item><title><![CDATA[Shadow AI Is No Longer the CISO's Problem — It's the Board's Liability]]></title><description><![CDATA[In my previous article, I wrote about the CISO's dilemma: how we speak a language of risk while boards hear only revenue.]]></description><link>https://www.varthalitis.eu/p/shadow-ai-is-no-longer-the-cisos-problem-its-the-boards-liability</link><guid isPermaLink="false">https://www.varthalitis.eu/p/shadow-ai-is-no-longer-the-cisos-problem-its-the-boards-liability</guid><dc:creator><![CDATA[Angelos Varthalitis]]></dc:creator><pubDate>Tue, 24 Feb 2026 07:57:20 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3267286c-d1ce-47a0-86f8-cfc0663ae801_708x400.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Y8VK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F882fafa4-d2c8-4035-b661-668137eca454_708x400.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!Y8VK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F882fafa4-d2c8-4035-b661-668137eca454_708x400.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Y8VK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F882fafa4-d2c8-4035-b661-668137eca454_708x400.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Y8VK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F882fafa4-d2c8-4035-b661-668137eca454_708x400.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Y8VK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F882fafa4-d2c8-4035-b661-668137eca454_708x400.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Y8VK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F882fafa4-d2c8-4035-b661-668137eca454_708x400.jpeg" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/882fafa4-d2c8-4035-b661-668137eca454_708x400.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Shadow AI Is No Longer the CISO's Problem &#8212; It's the Board's Liability&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Shadow AI Is No Longer the CISO's Problem &#8212; It's the Board's Liability" title="Shadow AI Is No Longer the CISO's 
Problem &#8212; It's the Board's Liability" srcset="https://substackcdn.com/image/fetch/$s_!Y8VK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F882fafa4-d2c8-4035-b661-668137eca454_708x400.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Y8VK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F882fafa4-d2c8-4035-b661-668137eca454_708x400.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Y8VK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F882fafa4-d2c8-4035-b661-668137eca454_708x400.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Y8VK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F882fafa4-d2c8-4035-b661-668137eca454_708x400.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a><p>In my previous article, I wrote about the CISO's dilemma: how we speak a language of risk while boards hear only revenue. The feedback I received confirmed what most of us already know &#8212; the communication gap is real, it is persistent, and it has consequences.</p><p>But something has changed. A new threat has entered the room, and this time it didn't wait for a boardroom presentation to make itself known. It walked in through every employee's laptop, every SaaS subscription approved without IT review, and every AI assistant quietly embedded into daily workflows.</p><p>It's called Shadow AI. And it is no longer a cybersecurity problem the CISO can manage in isolation. 
It has become a governance liability that sits squarely on the Board's shoulders &#8212; whether they know it or not.</p><div><hr></div><h2>What Shadow AI Actually Means</h2><p>Shadow AI refers to the use of artificial intelligence tools and services within an organisation without the knowledge, approval, or oversight of IT and security teams. Think of employees pasting customer data into ChatGPT to draft a proposal. Developers using an unsanctioned AI coding assistant that sends code snippets to external servers. Finance teams feeding sensitive spreadsheets into AI summarisation tools because it saves them two hours on a Monday morning.</p><p>None of this is malicious. All of it is dangerous.</p><p>The problem with Shadow AI is not that employees are breaking rules. The problem is that organisations have created conditions where the path of least resistance &#8212; the fast, productive, AI-assisted path &#8212; runs directly through a governance blind spot.</p><p>In my doctoral research on AI governance and enterprise cyber risk (I am currently completing a DBA in Artificial Intelligence, with a dissertation focused specifically on Shadow AI and the security ownership gap), I see this pattern repeated across sectors and organisation sizes. The technology has outpaced the governance. And the gap between them is where incidents are born.</p><div><hr></div><h2>From Shadow IT to Shadow Agents: The Escalation Nobody Is Ready For</h2><p>For years, security teams have managed Shadow IT &#8212; the use of unsanctioned applications and cloud services. It was a manageable problem. Visibility tools improved. Policies were written. Training was delivered.</p><p>Shadow AI is different in kind, not just degree.</p><p>Where Shadow IT introduced unauthorised tools, Shadow AI introduces autonomous behaviour. 
According to Google Cloud's <em>Cybersecurity Forecast 2026</em>, the proliferation of sophisticated, autonomous agents &#8212; often deployed by employees without corporate approval &#8212; "will create invisible, uncontrolled pipelines for sensitive data, leading to data leaks and compliance violations." [1] The report explicitly names this the "Shadow Agent" problem, warning that by 2026, it will escalate into one of the defining security challenges for enterprise organisations.</p><p>This is not a fringe prediction from a vendor with something to sell. It is the assessment of the Google Threat Intelligence Group and Mandiant Consulting, drawn from real-world incident response data.</p><p>These agents act, make decisions, and move data &#8212; often without any human in the loop at the moment it matters. And banning them is not a viable answer. As the same forecast states, banning agents "only drives usage off the corporate network, eliminating visibility entirely." [1] What is needed is governance &#8212; clear, proportionate, and actually enforced governance.</p><div><hr></div><h2>Why This Is Now a Board-Level Issue</h2><p>Here is where boards need to pay attention, and I say this with the same directness I brought to the boardroom communication problem in my previous piece.</p><p><strong>NIS2 and Personal Liability</strong></p><p>Under the NIS2 Directive (Directive EU 2022/2555), organisations classified as essential or important entities are required to implement cybersecurity risk management measures covering the security of their network and information systems, including supply chain security. 
Article 20 goes further than its predecessor in one critical way: it places personal accountability on management bodies.</p><p>As DLA Piper's analysis of NIS2's management body rules notes, "NIS2 introduces personal accountability for members of the management bodies," requiring that they not only approve but actively oversee the implementation of cybersecurity risk management measures. [2] Article 20(1) mandates this oversight as a standing obligation &#8212; not an annual event.</p><p>The practical consequence is stark. When an employee sends client data to an external AI service without authorisation, that is not simply a rogue action. In regulatory terms, it is a supply chain event &#8212; and Article 21(2)(d) of NIS2 explicitly requires entities to address security in their supply chain relationships, including the services they use. [3] If such an event results in a reportable incident, regulators will not direct their questions at the employee. They will examine the governance structure above them, and whether management was fulfilling its documented oversight obligations.</p><p>Fines for essential entities under NIS2 can reach &#8364;10 million or 2% of global annual turnover &#8212; whichever is greater. [3] And member states have latitude to gold-plate beyond the EU baseline. Germany's national implementation act, which entered into force in December 2025, allows management bodies to be held personally responsible for failures to implement and oversee required measures. [4]</p><p><strong>The EU AI Act: Another Layer of Exposure</strong></p><p>The EU AI Act adds a separate but overlapping regulatory dimension. The prohibition on certain AI practices has applied since February 2025. GPAI model obligations became effective in August 2025. Most remaining obligations, including the full transparency framework under Article 50, apply from August 2026. 
[5]</p><p>Organisations that deploy AI systems in ways that affect individuals &#8212; including employees &#8212; carry obligations around transparency, risk classification, and human oversight. Deploying an AI tool to screen job applications, assess employee performance, or make customer-facing decisions without proper governance is not just a cybersecurity risk. It is a direct regulatory exposure under a framework with fines of up to &#8364;35 million or 7% of global annual turnover for the most serious violations. [6]</p><p>Critically, analysis of current enterprise readiness suggests that over half of organisations lack systematic inventories of AI systems currently in production. [7] You cannot govern, classify, or demonstrate compliance for AI systems you do not know exist. Shadow AI makes this problem structurally worse.</p><div><hr></div><h2>What the Security Ownership Gap Looks Like in Practice</h2><p>In conversations with security leaders across Europe, a consistent picture emerges. The CISO raises Shadow AI as a risk. It is acknowledged. It is perhaps added to a risk register. And then it sits there, while the organisation continues to grow its AI footprint organically &#8212; department by department, tool by tool &#8212; without a coherent framework for who owns the risk, who monitors it, and who is accountable when something goes wrong.</p><p>This is what I describe in my dissertation as the security ownership gap: the space between identifying a risk and actually governing it. It is not a failure of intent. It is a structural failure. Security teams are asked to monitor a landscape they cannot fully see. Business teams adopt tools they do not fully understand. And boards approve digital transformation strategies without visibility into the shadow infrastructure growing beneath them.</p><p>The gap is not filled by a policy document. 
It is filled by a decision &#8212; a deliberate, board-level decision to treat AI governance as a strategic priority, not a compliance checkbox.</p><div><hr></div><h2>What Good Governance Actually Looks Like</h2><p>There is no simple answer, but there is a defensible starting framework.</p><p><strong>Visibility before control.</strong> You cannot govern what you cannot see. The first step is an honest inventory of AI tools in use across the organisation &#8212; not just the ones IT has approved, but the ones employees are actually using. This requires a combination of technical discovery and cultural honesty: making it safe to report AI tool usage without fear of punishment.</p><p><strong>Classification before prohibition.</strong> Not every AI tool carries the same risk. A framework that classifies AI usage by data sensitivity, autonomy level, and external connectivity allows proportionate responses. An AI assistant that summarises internal meeting notes is a different risk profile from an agent with API access to your CRM that autonomously contacts customers.</p><p><strong>Ownership before accountability.</strong> Every AI tool in use should have a named owner &#8212; not just a user, but someone accountable for its governance. This means understanding what data it processes, what its terms of service permit, and what the breach notification obligations are if something goes wrong.</p><p><strong>Documented decisions at board level.</strong> Where AI tools carry material risk, the decision to accept, mitigate, transfer, or avoid that risk should be documented and reviewed at the appropriate governance level. Under NIS2 Article 20, supervisory authorities require digital evidence showing that the board set direction, asked questions, issued challenges, and closed the loop on escalation. 
[8] Signing off on a policy once a year is no longer sufficient.</p><div><hr></div><h2>The Honest Conversation That Needs to Happen</h2><p>Most CISOs I know are not surprised by Shadow AI. They have been watching it grow for two years. What surprises them is the speed of escalation &#8212; from employees using ChatGPT to draft emails, to business units running autonomous agents that touch production data.</p><p>The conversation that needs to happen is not "how do we stop employees using AI." That conversation is over, and AI won. The conversation that needs to happen is: <em>who is accountable for AI risk in this organisation, and what decisions has the board actually made about it?</em></p><p>Because in the absence of that conversation, the default answer is: nobody is accountable, no decisions have been made, and the risk is accumulating quietly in every corner of the business.</p><p>That is not a technology problem. That is a governance failure. And governance failures, as NIS2 is now making very clear, have names attached to them.</p><div><hr></div><h2>A Final Thought</h2><p>In my previous article, I described the CISO's dilemma as the struggle to be heard in a boardroom that only hears revenue. Shadow AI changes that dynamic.</p><p>For the first time, the risk is not something the CISO is warning about in the future tense. It is something happening right now, at scale, across every organisation that has not yet built a framework to govern it.</p><p>AI is simultaneously the most powerful productivity tool most organisations have ever deployed and one of the most unmanaged risk surfaces they have ever created. The revenue conversation and the risk conversation are about to collide.</p><p>The question is not whether boards will have to engage with this. They will. 
The question is whether they will engage with it before an incident &#8212; or after one.</p><p>I know which conversation I would rather be in.</p><div><hr></div><h3>References</h3><p>[1] Google Cloud, <em>Cybersecurity Forecast 2026</em>, November 2025. Available at: <a href="https://cloud.google.com/blog/topics/threat-intelligence/cybersecurity-forecast-2026/?ref=varthalitis.ghost.io">https://cloud.google.com/blog/topics/threat-intelligence/cybersecurity-forecast-2026/</a></p><p>[2] DLA Piper, <em>NIS2 Directive Explained: Part 2 &#8211; Management Bodies Rules</em>, November 2025. Available at: <a href="https://www.dlapiper.com/en/insights/publications/2025/11/nis2-directive-explained-part-2-management-bodies-rules?ref=varthalitis.ghost.io">https://www.dlapiper.com/en/insights/publications/2025/11/nis2-directive-explained-part-2-management-bodies-rules</a></p><p>[3] Codific, <em>NIS2 Directive: Compliance Guide, Fines &amp; Scope</em>, November 2025. Available at: <a href="https://codific.com/nis-2-directive-compliance-guide-fines-scope/?ref=varthalitis.ghost.io">https://codific.com/nis-2-directive-compliance-guide-fines-scope/</a></p><p>[4] Morrison Foerster, <em>Flipping the NIS2 Switch: What Germany's Implementation Means for 2026 Compliance</em>, December 2025. Available at: <a href="https://www.mofo.com/resources/insights/251208-flipping-the-nis2-switch-what-germanys-implementation?ref=varthalitis.ghost.io">https://www.mofo.com/resources/insights/251208-flipping-the-nis2-switch-what-germanys-implementation</a></p><p>[5] European Commission, <em>Navigating the AI Act</em>, 2025. Available at: <a href="https://digital-strategy.ec.europa.eu/en/faqs/navigating-ai-act?ref=varthalitis.ghost.io">https://digital-strategy.ec.europa.eu/en/faqs/navigating-ai-act</a></p><p>[6] DLA Piper, <em>Latest Wave of Obligations Under the EU AI Act Take Effect</em>, August 2025. 
Available at: <a href="https://www.dlapiper.com/en-us/insights/publications/2025/08/latest-wave-of-obligations-under-the-eu-ai-act-take-effect?ref=varthalitis.ghost.io">https://www.dlapiper.com/en-us/insights/publications/2025/08/latest-wave-of-obligations-under-the-eu-ai-act-take-effect</a></p><p>[7] Secure Privacy, <em>EU AI Act 2026 Compliance Guide: Key Requirements Explained</em>, 2025. Available at: <a href="https://secureprivacy.ai/blog/eu-ai-act-2026-compliance?ref=varthalitis.ghost.io">https://secureprivacy.ai/blog/eu-ai-act-2026-compliance</a></p><p>[8] <a href="http://isms.online/?ref=varthalitis.ghost.io">ISMS.online</a>, <em>NIS 2 Article 20: What Personal Cyber Liability Means for Every Board Director</em>, October 2025. Available at: <a href="https://www.isms.online/nis-2/enforcement/board-liability/article-20/?ref=varthalitis.ghost.io">https://www.isms.online/nis-2/enforcement/board-liability/article-20/</a></p><div><hr></div><p><em>Angelos Varthalitis is Chief Information Security Officer at Ovivio NL (KidsKonnect), a leading European childcare SaaS platform. With over 20 years of cybersecurity experience across SaaS, energy, and transportation sectors, he holds certifications including CISM and NIS2 Lead Implementer. He is currently completing a Doctorate in Business Administration specialising in Artificial Intelligence, with a dissertation focused on Shadow AI and the security ownership gap in enterprise cyber risk governance. 
He writes at <a href="https://varthalitis.ghost.io/">www.varthalitis.eu</a>.</em></p>]]></content:encoded></item><item><title><![CDATA[The CISO’s Dilemma: Speaking Security in a Boardroom That Only Hears Revenue]]></title><description><![CDATA[I&#8217;ve sat in enough boardrooms to know how the story goes.]]></description><link>https://www.varthalitis.eu/p/the-cisos-dilemma-speaking-security-in-a-boardroom-that-only-hears-revenue</link><guid isPermaLink="false">https://www.varthalitis.eu/p/the-cisos-dilemma-speaking-security-in-a-boardroom-that-only-hears-revenue</guid><dc:creator><![CDATA[Angelos Varthalitis]]></dc:creator><pubDate>Sat, 21 Feb 2026 21:10:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cLT9!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6e49acf-9c42-4d01-b9e7-1a13a04d9488_400x400.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;ve sat in enough boardrooms to know how the story goes.</p><p>You prepare for weeks. You build the deck. You map the threats, quantify the risks, present the roadmap. You walk in confident. You walk out with 30% of the budget you asked for and a polite suggestion to &#8220;prioritize.&#8221; (Schmitz-Berndt, 2021)</p><p>Three months later, something breaks. And suddenly everyone wants to know why security wasn&#8217;t taken seriously.</p><p>Sound familiar?</p><p>The problem isn&#8217;t technical. It&#8217;s linguistic.</p><p>Boards are not incompetent. They are laser-focused on what they are accountable for &#8212; revenue, growth, shareholder value, regulatory exposure. When a CISO walks in talking about CVEs, threat actors, and zero-day vulnerabilities, the board doesn&#8217;t tune out because they don&#8217;t care. 
They tune out because they genuinely don&#8217;t know what to do with that information.</p><p>We are speaking Portuguese in a room that only understands Greek.</p><p>The gap between the CISO and the board is not an intelligence gap. It is a translation gap. And for too long, we have expected the board to learn our language instead of learning theirs.</p><p><strong>What boards actually respond to</strong></p><p>Forget the technical jargon. Start asking yourself one question before every board presentation: &#8220;So what does this mean for the business?&#8221;</p><p>A ransomware risk is not a technical problem. It is 72 hours of operational downtime, &#8364;2M in recovery costs, and your company&#8217;s name in every major newspaper. Say that instead.</p><p>A missing patch management process is not a vulnerability gap. It is a direct NIS2 compliance violation that exposes your board members to personal liability. Say that instead.</p><p>Regulatory frameworks like NIS2 are actually a CISO&#8217;s best friend in the boardroom &#8212; not because compliance is the goal, but because liability gets attention that threat intelligence never will. When board members understand that a security failure can land on them personally, the conversation changes fast.</p><p><strong>The other side of the problem</strong></p><p>Here is something we don&#8217;t say out loud enough: sometimes we make it worse.</p><p>We present problems without options. We ask for everything and explain nothing. We lead with worst-case scenarios and then wonder why the board becomes numb to the alarm bells.</p><p>The most effective CISOs I know present like business executives, not security engineers. They bring three options with clear trade-offs &#8212; not one &#8220;right answer.&#8221; They quantify risk in euros, not CVSS scores. 
They connect every recommendation to a business outcome the board already cares about.</p><p>They make it easy to say yes.</p><p><strong>What I have learned</strong></p><p>After more than two decades in cybersecurity &#8212; across SaaS, energy, transportation, and critical infrastructure &#8212; the hardest lesson I had to learn was this:</p><p>My job is not just to protect the organization. My job is to make the board capable of making informed decisions about risk.</p><p>That is a fundamentally different mandate. And it changes everything about how you walk into that room.</p><p>Stop trying to make them afraid. Start trying to make them informed. Fear creates paralysis. Information creates decisions.</p><p><strong>Three things you can do starting Monday</strong></p><p>First, audit your last board presentation. Count how many times you used technical terminology without a business translation. That number is your communication gap.</p><p>Second, find your CFO before your next board meeting. Understand what financial risks keep them up at night. Then connect your security roadmap directly to those risks. You will be speaking their language before you even enter the room.</p><p>Third, stop presenting problems. Start presenting choices. &#8220;We can accept this risk, mitigate it with X investment, or transfer it through insurance &#8212; here are the trade-offs.&#8221; Boards are decision-making bodies. Give them something to decide.</p><p><strong>The bottom line</strong></p><p>The board is not your obstacle. The board is your most important stakeholder &#8212; and right now, most of us are failing to communicate with them effectively.</p><p>The organizations that will navigate the next wave of cyber threats, regulatory pressure, and AI-driven risk are not necessarily the ones with the biggest security budgets. They are the ones where the CISO and the board are actually speaking the same language.</p><p>That translation starts with us.</p>]]></content:encoded></item></channel></rss>