The Anthropic vs US government security risk case became official on March 9, 2026, when Anthropic filed two federal lawsuits against the Trump administration challenging a Pentagon order that labelled the company a national security threat. Anthropic was officially designated a supply chain risk, a label that requires defense contractors to certify they don't use the company's models in their work with the Pentagon. The case marks the first time in US history that an American AI company has been blacklisted under a designation previously reserved for foreign adversaries.
Background: What Is Anthropic AI and Why Does It Matter?
Anthropic AI is a San Francisco-based artificial intelligence safety company founded in 2021 by Dario Amodei, Daniela Amodei, and other former OpenAI researchers. Its flagship product is Claude — an AI assistant and large language model used across thousands of businesses, government agencies, and consumers worldwide.
Anthropic AI was valued at approximately $61 billion in its most recent funding round — making it one of the most valuable private technology companies in the world. Its Anthropic valuation reflects massive investor confidence from Amazon, Google, and others who have collectively poured billions into the company.
Anthropic federal contracts made the company one of the US government’s primary AI partners. Before the Anthropic vs US government security risk case erupted publicly, Anthropic AI served as an early partner across many US agencies as the government sought to rapidly upgrade its systems with cutting-edge AI technology. The military had been using Claude to process intelligence and targeting data during the ongoing Iran war.
Anthropic AI built its reputation on a core principle — responsible AI development. Two hard limits were central to its Anthropic federal contract terms: Claude would not be used for mass domestic surveillance of Americans, and Claude would not power fully autonomous weapons systems without human decision-making.
Details: The Anthropic vs US Government Security Risk Case — Full Story
How the Anthropic vs US Government Security Risk Case Began
The Anthropic vs US government security risk case began when the Pentagon demanded Anthropic remove all restrictions from its Claude AI model for military use.
The Pentagon wanted to use Anthropic AI for "all lawful purposes," saying it could not allow a private company to dictate how the military uses its tools in a national security emergency. Anthropic AI refused. CEO Dario Amodei met with Defense Secretary Pete Hegseth on February 24, 2026, but the two sides reached no agreement. Amodei stated publicly that AI models were not yet reliable or safe enough for mass surveillance or fully autonomous weapons, and that no amount of government pressure would change Anthropic's position.
On February 27, 2026, the Trump administration ordered federal agencies and military contractors to halt all business with Anthropic AI. That same day, Defense Secretary Pete Hegseth said Anthropic would be labelled a supply chain risk, adding that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
Trump wrote on Truth Social: “WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about.”
The Anthropic vs US Government Security Risk Case — The Lawsuits
Anthropic filed two complaints against the Department of Defense on March 9, one in the US District Court for the Northern District of California and another in the federal appeals court in Washington DC, after a weeks-long conflict over whether the military should have unrestricted access to Anthropic's AI systems.
In the Anthropic vs US government security risk case, Anthropic is asking courts to undo the supply chain risk designation, block its enforcement, and require federal agencies to withdraw directives to drop the company.
In the lawsuit, Anthropic argued the government does not have to agree with its views or use its products — but it cannot employ the power of the state to punish or suppress Anthropic’s expression. Anthropic also argued that “no federal statute authorizes the actions taken here,” claiming the Defense Department’s supply-chain risk designation was issued “without observance of the procedures Congress required.”
What the Anthropic vs US Government Security Risk Case Means for Revenue
The Anthropic vs US government security risk case puts "hundreds of millions of dollars" in direct federal revenue at risk. CFO Krishna Rao warned the hit could be far more severe: "across Anthropic's entire business, the government's actions could reduce Anthropic's 2026 revenue by multiple billions of dollars."
The Anthropic valuation — currently around $61 billion — could face significant pressure if the Anthropic vs US government security risk case is lost and the blacklist remains in force.
Anthropic Federal Contracts vs OpenAI
Just hours after the Anthropic vs US government security risk case escalated publicly, OpenAI struck its own deal with the Pentagon — apparently agreeing to provide its models without the contractual limitations Anthropic had insisted upon. The deal drew sharp criticism, with many questioning whether OpenAI’s contract offered meaningfully different protections. OpenAI later acknowledged the announcement looked “sloppy and opportunistic” and said it was renegotiating some terms.
The Anthropic vs US government security risk case has therefore put the entire AI industry on notice, forcing every major AI company to decide whether it will accept unrestricted military use of its technology or risk the same blacklisting Anthropic now faces.
Is the Anthropic vs US Government Security Risk Designation Legal?
Legal experts are deeply sceptical of the government’s position in the Anthropic vs US government security risk case.
Lawyers Michael Endrias and Alan Z. Rozenshtein wrote in Lawfare that the designation “exceeds what the statute authorizes,” that “the required findings don’t hold up,” and that Hegseth’s own public statements “may have doomed the government’s litigation posture before it even begins.”
The supply chain risk designation was designed to prevent foreign adversaries from harming national security systems — making the Anthropic vs US government security risk case the first time the federal government has used this label against an American company.
Quotes
Anthropic AI spokesperson, filing statement on the Anthropic vs US government security risk case: “Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners.”
Anthropic AI blog post, on the Anthropic vs US government security risk case: “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.”
Anthropic AI’s lawsuit, on the Anthropic vs US government security risk case: “Defendants are seeking to destroy the economic value created by one of the world’s fastest-growing private companies. The Challenged Actions inflict immediate and irreparable harm on Anthropic; on others whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance.”
Defense Undersecretary Emil Michael, on potential resolution of the Anthropic vs US government security risk case: “I have a responsibility to the Department of War, and if there was a way to ensure that we had the best technology, I have no ego about it.”
President Trump, on Anthropic AI: “Anthropic has made a disastrous mistake — they are trying to dictate how the military operates.”
Impact: What the Anthropic vs US Government Security Risk Case Means
For Anthropic AI and Its Valuation
The Anthropic vs US government security risk case threatens to cost the company billions in lost Anthropic federal contracts and commercial partnerships. The Anthropic valuation — built in part on government and enterprise trust — faces its most serious test since the company was founded.
Despite the crisis, Anthropic AI's public profile has only risen. Its Claude AI app surpassed OpenAI's ChatGPT in the iPhone App Store for the first time the day after the Pentagon announced it would terminate its contracts with Anthropic. The company also said on March 5 that more than one million people are signing up for Claude every day.
For Anthropic Federal Contracts and the AI Industry
The Anthropic vs US government security risk case sets a precedent for every AI company holding federal contracts or government partnerships. If the government wins, it establishes that the Pentagon can demand unrestricted use of any AI system, stripping out all developer-imposed safety guardrails.
If Anthropic wins the Anthropic vs US government security risk case, it establishes that AI companies retain First Amendment protections over their safety policies — and that the supply chain risk designation cannot be weaponised against American companies for policy disagreements.
For AI Safety and Autonomous Weapons
The Anthropic vs US government security risk case is ultimately a fight over who controls the guardrails on the most powerful AI systems in the world. The outcome could shape how other AI companies negotiate restrictions on military use of their technology — with consequences for autonomous weapons development, mass surveillance, and the future of AI governance globally.
Conclusion
The Anthropic vs US government security risk case is one of the most consequential legal battles in the history of artificial intelligence. On one side stands an AI safety company arguing it has a constitutional right to set limits on how its technology is used in warfare. On the other stands a government claiming no private company can constrain the military in a national emergency.
The Anthropic vs US government security risk case will take months or years to resolve in court. But the questions it raises — about AI safety, about government power, and about who controls the technology that may soon decide the outcome of wars — cannot wait that long.
Anthropic AI has drawn a line. The Anthropic vs US government security risk case will determine whether that line holds.