China Sounds Alarm Over US Military Use of AI

China formally warned on March 11, 2026, that unrestricted US use of AI military technology could trigger a Terminator-style dystopia. China's defence ministry spokesman Jiang Bin called on the United States to stop giving algorithms the power to determine life and death in warfare. The warning came as the Pentagon confirmed it had deployed AI military technology at scale in the Iran war, and as Anthropic AI, maker of Claude AI, filed two federal lawsuits challenging the Pentagon's decision to blacklist the company for refusing to remove safety guardrails from its AI systems.

Background: Why Is US AI Military Technology So Controversial?

US AI military technology refers to the deployment of artificial intelligence systems — including large language models, targeting algorithms, surveillance tools, and autonomous weapons — by the United States armed forces in active combat and intelligence operations.

Claude AI is Anthropic AI's flagship large language model and was the Pentagon's most widely deployed frontier AI model, the only such model operating on the Defence Department's classified systems before the blacklist crisis. Anthropic AI was founded in 2021 and is currently valued at approximately $61 billion. Anthropic AI stock is not publicly traded, but its major investors include Amazon and Google.

Anthropic AI built its reputation on one core principle: AI should never be used for mass domestic surveillance of Americans or for fully autonomous weapons systems in which algorithms decide who lives and who dies without a human in the loop.

The killing of Iran’s Ayatollah Ali Khamenei in February 2026 showcased how deeply US AI military technology had become embedded in American combat operations. Chinese advisers publicly highlighted how Palantir, Anthropic AI, and Anduril — the so-called “tech right” — drove intelligence gathering, data processing, and operational execution throughout the strikes.

US AI military technology is no longer theoretical. It has already been used to kill. That reality is what China’s warning on March 11 was directly responding to.

Details: China’s Alarm Over US AI Military Technology — Full Story

What China Said About US AI Military Technology

Speaking at a formal defence ministry briefing in Beijing on March 11, 2026, China’s spokesperson Jiang Bin issued Beijing’s most comprehensive public warning against US AI military technology to date.

Jiang Bin stated that the unrestricted application of US AI military technology in warfare could erode ethical constraints and accountability, and risk dangerous technological runaway. He warned that “using AI as a tool to violate the sovereignty of other nations, allowing AI to excessively affect war decisions, and giving algorithms the power to determine life and death” were unacceptable risks of US AI military technology.

Jiang Bin said China opposes any attempt to use US AI military technology to pursue absolute military dominance or undermine the sovereignty of other countries. He added that Beijing believes “human primacy must be upheld in all military applications of AI” and called for “UN centrality” in governing US AI military technology globally.

His warning ended with one of the most widely quoted statements of the US AI military technology debate: “A dystopia depicted in the American film The Terminator could one day come true.”

US AI Military Technology — Confirmed Use in Iran and Venezuela

The Pentagon formally confirmed it had deployed US AI military technology at scale in operations against Iran under Operation Epic Fury. Pentagon spokeswoman Kingsley Wilson defended the use of US AI military technology directly — warning that American warfighters would “never be held hostage by unelected tech executives and Silicon Valley ideology.”

US AI military technology was also reported to have been used in military operations against Venezuela — making the Iran conflict the second active use of frontier AI models in US combat operations in 2026. Global Times — China’s state media outlet — reported both confirmed deployments of US AI military technology as evidence of the US military’s determination to dominate algorithmic warfare.

Anthropic AI vs Pentagon — The US AI Military Technology Safety Dispute

The immediate trigger for China’s formal warning on US AI military technology was the public battle between Anthropic AI and the US Department of Defense.

Anthropic AI infuriated Pentagon chief Pete Hegseth by insisting that Claude AI must not be used for mass domestic surveillance or fully autonomous weapons. President Trump ordered all federal agencies to cease using Anthropic AI technology. Hegseth then designated Anthropic AI a Supply Chain Risk to National Security, ordering that no military contractor or partner may conduct any commercial activity with the company.

The Anthropic AI blacklist effectively removed Claude AI from US military AI operations, creating a significant capability gap in the Pentagon's AI infrastructure at a time of active conflict.

Anthropic AI filed two federal lawsuits against the Trump administration. The lawsuits argued that no federal statute authorised the blacklist and that the government cannot use state power to punish an American company for its position on US AI military technology safety.

Claude AI — The Model at the Centre of the US AI Military Technology Debate

Claude AI was the only frontier AI model operating on the Pentagon's classified systems before the blacklist crisis. Its removal raises serious questions about which model will replace it, and whether any replacement will maintain the same safety restrictions that Anthropic AI had insisted upon.

OpenAI struck its own deal with the Pentagon to provide its models for US AI military technology without the contractual safety limitations Anthropic AI demanded. AI safety researchers condemned the move — warning it set a dangerous precedent for the governance of US AI military technology globally.

Anthropic AI Stock — What Investors Face

Anthropic AI is a private company; there is no publicly traded Anthropic AI stock. The US AI military technology crisis nonetheless has significant consequences for Anthropic AI's investors.

Anthropic AI CFO Krishna Rao warned the blacklist could reduce 2026 revenue by multiple billions of dollars. The company's valuation of approximately $61 billion faces downward pressure if the blacklist is upheld in court and its US military AI contracts cannot be recovered.

Despite Anthropic AI stock and valuation concerns, public support for the company has surged since the US AI military technology dispute went public. Claude AI surpassed ChatGPT in the Apple App Store the day after the Pentagon blacklist was announced. More than one million people are now signing up for Claude AI every day — a remarkable response to the US AI military technology controversy.

China’s Own Military AI — The Contradiction at the Heart of US AI Military Technology Governance

While China publicly warned against US AI military technology, Beijing advisers privately gave a very different message. Zheng Yongnian — a leading political scientist and government adviser — warned that China risks repeating historical mistakes if it fails to convert AI into decisive military hard power.

He stated China must speed up its own military AI programme and deepen civil-military fusion to narrow the strategic gap that US AI military technology superiority has created. China’s public warning against US AI military technology and its private drive to accelerate its own military AI reveal a fundamental contradiction — every major power warns against what others are doing while racing to do exactly the same.

Quotes

China Defence Ministry spokesman Jiang Bin, on US AI military technology, March 11, 2026: “Using AI as a tool to violate the sovereignty of other nations, allowing AI to excessively affect war decisions, and giving algorithms the power to determine life and death not only erode ethical restraints and accountability in wars but also risk technological runaway. A dystopia depicted in the American film The Terminator could one day come true.”

Pentagon spokeswoman Kingsley Wilson, defending US AI military technology use: “America’s warfighters supporting Operation Epic Fury and every mission worldwide will never be held hostage by unelected tech executives and Silicon Valley ideology. We will decide, we will dominate, and we will win.”

Anthropic AI CEO Dario Amodei, on Claude AI and US AI military technology limits: “AI models are not yet reliable or safe enough for mass surveillance or fully autonomous weapons. No amount of government pressure will change our position.”

Anthropic AI federal lawsuit, on the US AI military technology blacklist: “Defendants are seeking to destroy the economic value created by one of the world’s fastest-growing private companies. The government’s actions inflict immediate and irreparable harm on Anthropic, on those whose speech will be chilled, and on a global public that deserves robust dialogue on what AI means for warfare.”

Zheng Yongnian, Beijing adviser, on China’s response to US AI military technology: “China must speed up military applications of AI and deepen civil-military fusion to narrow its strategic gap with the United States. China risks repeating historical mistakes if it limits AI mostly to civilian uses and fails to convert frontier technologies into decisive hard power.”

Impact: What the US AI Military Technology Debate Means for the World

For Global AI Governance

China’s warning on US AI military technology marks the first time a major power has formally called for UN-level multilateral governance of military AI since frontier models like Claude AI became embedded in active combat operations. The call for UN centrality in governing US AI military technology sets the stage for a major global governance confrontation in the months ahead.

For Anthropic AI and Claude AI

The US AI military technology dispute has made Anthropic AI and Claude AI globally famous, and globally controversial. The company best known for building safe AI is now at the centre of the world's most consequential debate: whether any company can set limits on how governments deploy its AI models in warfare.

While Anthropic AI stock concerns weigh on investors short-term, the long-term reputational value of standing up to the Pentagon over US AI military technology ethics may prove more valuable than any government contract.

For the US-China AI Race

The Iran war exposed the true depth of US superiority in AI military technology, and the true depth of China's strategic anxiety. Khamenei's elimination by AI-assisted precision strikes sent shockwaves through Beijing. Chinese advisers are now openly debating whether China's failure to deploy equivalent military AI leaves it dangerously exposed in any future Taiwan conflict.

For the Future of Warfare

The US AI military technology crisis of 2026 has answered one question definitively: artificial intelligence is already being used in active warfare at scale. The questions that remain unanswered are who controls it, what limits apply, and what happens when an algorithm makes a decision that no human authorised.

Conclusion

China’s March 11, 2026, warning on US AI military technology is one of the most significant official statements on artificial intelligence and war ever made by a major power. The Terminator reference was deliberate — an escalation of the global debate over where US AI military technology ends and autonomous killing begins.

The Anthropic AI dispute with the Pentagon brought this debate into the open. Claude AI — built to be safe — became the symbol of a world asking whether safety and military utility can coexist in the same US AI military technology system.

Anthropic AI stock concerns, Claude AI deployments, and Beijing’s formal warnings are all symptoms of the same reality: US AI military technology has already crossed a threshold in the Iran war from which there is no return.

The algorithms have been used. The kills have been made. The question is no longer whether US AI military technology will shape future wars; it already has. The question is whether the world will govern it before The Terminator stops being a metaphor and starts being the morning news.

FAQs

How does the US military use technology?

It uses advanced technology such as drones and autonomous systems, secure communication networks, AI for data analysis and targeting, and cloud and computing systems to gather intelligence, coordinate forces, improve precision, and keep soldiers safer.

Is the US military using AI?

Yes. The US military applies AI tools to help analyze intelligence, speed decision-making, improve planning, and support operations across many areas of defense.

Which type of computer is used by the military?

The military uses supercomputers, rugged computers, embedded systems, and experimental quantum computers.
