When Principles Meet Contracts: What Anthropic Got Right — and Dangerously Wrong

The confrontation between Anthropic and the Pentagon that culminated on February 27, 2026 is already being framed as a battle between AI ethics and government overreach. That framing is partially correct. But as someone who has spent over two decades managing enterprise security, vendor relationships, and regulatory compliance, I see something else entirely: a case study in vendor governance failure with consequences that will reshape how AI companies engage with government clients.
Let me be clear upfront. My position is not binary. On surveillance, Anthropic is right. On autonomous weapons, the Pentagon is right. And on vendor governance, Anthropic made a serious, avoidable mistake.
Where Anthropic Is Right: The Surveillance Line
Mass surveillance of a country’s own citizens is not a gray area. It is a documented, historical pattern of institutional abuse. From COINTELPRO to PRISM, from the Church Committee findings to the Snowden revelations, the evidence is consistent: when governments acquire surveillance infrastructure, the scope expands, the oversight erodes, and citizens pay the price.
COINTELPRO (1956–1971) was a covert FBI program that surveilled, infiltrated, discredited, and disrupted American political organizations, targeting civil rights leaders, anti-Vietnam War groups, feminist organizations, and the Black Panther Party. It operated through illegal wiretapping, forged documents, anonymous harassment campaigns, and physical infiltration of lawful political groups. Its existence was unknown to the public until March 1971, when activists broke into an FBI field office in Media, Pennsylvania and leaked classified documents to the press. [1] The U.S. Senate later concluded that the Bureau had conducted “a sophisticated vigilante operation aimed squarely at preventing the exercise of First Amendment rights of speech and association.” [2]
The Church Committee (1975–1976) was a U.S. Senate select committee formed specifically in response to COINTELPRO and emerging reports of intelligence agency abuse. Chaired by Senator Frank Church, it investigated the CIA, NSA, FBI, and IRS and uncovered a catalog of previously unknown programs: Operation SHAMROCK (the NSA intercepting all telegrams entering and leaving the U.S. since 1945), Project MINARET (warrantless surveillance of American citizens including Martin Luther King Jr., Jane Fonda, and even Senator Church himself), CIA assassination plots against foreign leaders, and mass mail-opening operations. [3] Senator Walter Mondale publicly warned during the hearings that the NSA’s capabilities “could be used by President ‘A’ in the future to spy upon the American people, to chill and interrupt political dissent.” [4] The committee’s findings directly led to the creation of the Foreign Intelligence Surveillance Act (FISA) in 1978.
PRISM (revealed 2013) was an NSA classified program that collected stored internet communications — including emails, photos, documents, audio, and video — directly from the servers of nine major technology companies including Microsoft, Google, Apple, Facebook, Yahoo, Skype, and YouTube. It operated under Section 702 of the FISA Amendments Act and was revealed to account for approximately 91% of the roughly 250 million internet communications the NSA collected each year. [5] The program’s existence was completely unknown to the public until it was disclosed by Edward Snowden.
The Snowden Revelations (2013) documented the full scope of NSA surveillance through thousands of leaked classified documents provided to journalists Glenn Greenwald and Laura Poitras. Beyond PRISM, Snowden exposed XKeyscore, an analytical tool that allowed NSA analysts to search through vast databases of emails, online chats, and browsing histories of millions of individuals without prior authorization. [6] He also revealed bulk telephone metadata collection requiring Verizon to hand over records of millions of Americans’ calls daily, global financial data tracking, and extensive surveillance of allied foreign governments. In September 2020, a U.S. federal court ruled that the NSA’s bulk phone metadata collection program exposed by Snowden was illegal and possibly unconstitutional. [7]
The pattern across all four cases is identical: programs launched with narrow stated purposes, expanded far beyond original scope, operated with minimal or no oversight, and only exposed through leaks or external investigation — never through internal accountability.
Anthropic’s refusal to allow Claude to be used as an engine for domestic mass surveillance is not ideological posturing. It is a reasonable safeguard grounded in this evidence. An AI model capable of processing vast amounts of unstructured data, integrated into classified government networks, used to monitor citizens at scale — that is a fundamentally different proposition than any previous surveillance tool. The potential for abuse is not hypothetical. It is historically documented.
On this point, I support Anthropic’s position without reservation. Certain lines should not be crossed regardless of contractual pressure, and the mass surveillance of citizens is one of them.
Where the Pentagon Is Right: The Weapons Autonomy Argument
The question of autonomous weapons is more nuanced, and here my view diverges from Anthropic’s.
When a sovereign military procures a technology for lawful defense purposes, it operates within an existing legal framework — international humanitarian law, rules of engagement, civilian oversight, command accountability. These are not perfect systems, but they are established governance structures with centuries of development behind them.
A private AI vendor inserting its own restrictions on weapons targeting decisions is a qualitatively different kind of intervention. It assumes that Anthropic’s judgment on acceptable military use supersedes the legal and operational frameworks that already govern these decisions. That is a significant overreach for a commercial technology provider.
If today’s AI models are not reliable enough for autonomous targeting — and there are serious technical arguments that they are not — that concern should be addressed through contractual reliability standards, testing requirements, and operational guardrails negotiated upfront. Not through unilateral vendor restrictions imposed after deployment.
The Pentagon’s position that it must retain operational flexibility for all lawful use cases is not unreasonable. That is how defense procurement works.
Where Anthropic Failed: The Vendor Governance Mistake
This is where I need to be direct, because the industry commentary has largely missed this point.
Anthropic accepted a contract with the Department of Defense. They operated within classified environments. They became embedded in mission-critical systems — reportedly including the operation to capture Nicolás Maduro. [8] And then, after achieving that position of operational dependency, they attempted to impose restrictions that should have been negotiated before the relationship began.
In any mature vendor management framework — ISO 27001, NIST SP 800-161, or basic procurement governance — this behavior would trigger an immediate red flag. It is not how responsible vendors operate.
Consider the analogy in enterprise security: imagine a SIEM vendor, after deployment in your SOC, informing you that it has reviewed your use cases and is uncomfortable with certain threat hunting activities. You would not accept it. You would escalate it. You would terminate the contract and flag that vendor in every future assessment.
The DoD’s reaction, while politically charged in its execution, follows exactly this logic. A critical vendor changing terms on embedded, operational infrastructure is a supply chain risk by definition.
If Anthropic had genuine ethical objections to certain military use cases — and some of those objections are legitimate — the time to raise them was before signing. The options were straightforward: negotiate explicit use restrictions in the contract upfront, decline the engagement entirely, or accept the operational realities of defense contracting. What they could not do, with any integrity, was take the contract, become critical infrastructure, and then attempt to impose conditions mid-relationship.
Dario Amodei is a brilliant researcher and a principled leader. But this decision reflects a gap between AI safety expertise and enterprise vendor governance experience. Those are different disciplines, and conflating them created this crisis.
What This Means for the Industry
This situation will not be the last of its kind. As AI models become embedded in government systems, healthcare infrastructure, financial services, and critical national systems, the tension between vendor ethics and client operational requirements will intensify.
The lesson for AI companies is not to abandon principles. It is to operationalize them earlier in the engagement cycle. Ethical red lines belong in contract negotiations, not in operational disputes after deployment.
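What does "operationalize earlier" look like in practice? The difference between a principle and a dispute is often just a date. Here is a minimal sketch in Python, with hypothetical names and dates throughout (this is not any company's actual process): every red line becomes a dated contract artifact, and any restriction dated after the contract's effective date is exactly the mid-relationship condition change this article criticizes.

```python
# Minimal sketch, assuming hypothetical names throughout: ethical red lines
# expressed as dated, contract-time artifacts. Any restriction agreed after
# the contract's effective date is a governance flag, not an enforceable
# red line.

from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class UseRestriction:
    category: str     # e.g. "domestic_mass_surveillance"
    rationale: str
    agreed_on: date   # should predate the contract's effective date

@dataclass
class EngagementTerms:
    effective_date: date
    restrictions: list[UseRestriction] = field(default_factory=list)

    def late_restrictions(self) -> list[str]:
        """Restrictions introduced after signing: the failure mode above."""
        return [r.category for r in self.restrictions
                if r.agreed_on > self.effective_date]

terms = EngagementTerms(
    effective_date=date(2025, 6, 1),
    restrictions=[
        UseRestriction("domestic_mass_surveillance",
                       "documented historical abuse pattern", date(2025, 5, 15)),
        UseRestriction("autonomous_targeting",
                       "model reliability concerns", date(2026, 2, 20)),
    ],
)
print(terms.late_restrictions())  # ['autonomous_targeting']
```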
The lesson for organizations procuring AI is equally clear. Vendor AI governance is now a distinct risk category that your existing frameworks may not adequately address. You need to assess not just what an AI vendor’s model can do, but what conditions the vendor will unilaterally impose on how you use it — and when.
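On the procurement side, the same idea inverts. As a minimal sketch, with illustrative criteria rather than any standard's actual control set, the buyer-side assessment might look like this:

```python
# Minimal sketch of a buyer-side check, with hypothetical criteria: does the
# vendor's ability to change use conditions live in the contract, or can it
# be exercised unilaterally after deployment?

from dataclasses import dataclass

@dataclass
class AIVendorProfile:
    use_restrictions_fixed_at_signing: bool  # red lines negotiated upfront?
    unilateral_change_clause: bool           # vendor may alter terms later?
    change_notice_days: int                  # contractual notice before changes
    exit_plan_documented: bool               # migration path if the vendor walks

def supply_chain_flags(v: AIVendorProfile) -> list[str]:
    flags = []
    if not v.use_restrictions_fixed_at_signing:
        flags.append("use restrictions undefined at signing")
    if v.unilateral_change_clause:
        flags.append("vendor can impose conditions mid-relationship")
    if v.change_notice_days < 90:  # 90 days is an illustrative threshold
        flags.append("insufficient notice period for term changes")
    if not v.exit_plan_documented:
        flags.append("no exit plan for an embedded dependency")
    return flags

print(supply_chain_flags(AIVendorProfile(False, True, 30, False)))
```

The point is not the specific fields. It is that a vendor's policy surface is now part of your supply chain risk surface, and it belongs in the same assessment as the technical one.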
The lesson for policymakers and regulators is that the gap between AI safety principles and procurement law is real, consequential, and urgently needs to be addressed. We need frameworks that allow AI companies to maintain ethical boundaries without creating operational crises in the institutions that depend on their technology.
Conclusion
Anthropic drew two lines. One of them, surveillance, was the right line to draw. The other, weapons autonomy, overstepped what a commercial vendor should dictate to a sovereign military operating within legal frameworks.
But the deeper failure was not about where the lines were drawn. It was about when. Principles have no credibility if they only appear after the contract is signed, the systems are deployed, and the dependency is established.
The AI industry is navigating genuinely new territory. The companies that will earn lasting trust — from governments, enterprises, and the public — are those that build governance into the relationship from the first conversation, not those that impose it at the moment of maximum leverage.
That is not ethics. That is vendor risk management. And right now, the industry needs both.
References
[1] Federal Bureau of Investigation. COINTELPRO. FBI Records: The Vault. https://vault.fbi.gov/cointel-pro
[2] U.S. Senate Historical Office. Senate Select Committee to Study Governmental Operations with Respect to Intelligence Activities (Church Committee). https://www.senate.gov/about/powers-procedures/investigations/church-committee.htm
[3] Frank Church Institute, Boise State University. Curtailment of the National Security State: The Church Senate Committee of 1975–1976. https://www.boisestate.edu/sps-frankchurchinstitute
[4] PBS Frontline. Pre-Emption: The Church Committee Hearings & The FISA Court. https://www.pbs.org/wgbh/pages/frontline/homefront/preemption/churchfisa.html
[5] Electronic Privacy Information Center. EPIC v. DOJ – PRISM. https://epic.org/documents/epic-v-doj-prism/
[6] MacAskill, E. & Greenwald, G. (2013, July 31). XKeyscore: NSA tool collects ‘nearly everything a user does on the internet’. The Guardian.
[7] United States Court of Appeals for the Ninth Circuit ruling, September 2020; summarized in Wikipedia, Snowden effect. https://en.wikipedia.org/wiki/Snowden_effect
[8] Axios. Trump moves to blacklist Anthropic’s Claude from government work (February 27, 2026). https://www.axios.com/2026/02/27/anthropic-pentagon-supply-chain-risk-claude

