Anthropic Endangers National Security
U.S. Special Operations forces captured Nicolás Maduro in a pre-dawn raid in Caracas on January 3rd. The mission reportedly used the military’s Maven Smart System, which is built by Palantir and has Anthropic’s Claude model, a powerful AI engine, embedded for data analysis and targeting.
A month later, in February, Anthropic pushed back at Palantir, insisting that its model not be used to conduct surveillance of Americans or to support kinetic (kill) operations, and demanding to know how Claude was being used. That demand triggered a furious Pentagon reaction, because Palantir and Claude were being used in the planning and operational phases of what became the US and Israeli attack on Iran beginning on February 28th. Moreover, Pentagon military planning is highly classified, and Anthropic, a private company, holds no security clearances; even if the Pentagon had thought it advisable to respond to Anthropic’s demands (which it emphatically did not), it could not have done so.
Had Anthropic prevailed, the Iran operation would have been dangerously compromised. So too would US intelligence support to Ukraine, other NATO and Pacific-region security operations, and possibly even defenses against nuclear attack.
Anthropic is a privately held company. Its main investors are Amazon, which has invested roughly $4 billion and serves as Anthropic’s primary cloud provider (AWS); Google (Alphabet), which invested $2 billion and integrated Claude into various enterprise services; and Microsoft, together with Nvidia, as major backers.
Amazon has huge contracts with the Department of War and the US government. Amazon, specifically through Amazon Web Services (AWS), is one of the U.S. government’s most vital technology partners. Amazon supplies the “cloud” component of the Joint Warfighting Cloud Capability (JWCC): it is one of four providers (alongside Google, Microsoft, and Oracle) on this $9 billion contract. It provides “tactical edge” computing, meaning AWS servers are used directly on battlefields and in remote command centers. It is also the lead provider for the CIA and the broader Intelligence Community (IC) under a multi-billion-dollar framework that handles classified workloads. In late 2025, Amazon announced a staggering $50 billion investment specifically to expand AI and supercomputing infrastructure for U.S. government customers, including dedicated data centers in the AWS Top Secret, AWS Secret, and GovCloud regions.
Google’s relationship with the U.S. government has shifted from a period of public employee protests (like those over Project Maven in 2018) to becoming a cornerstone of the nation’s defense and civilian infrastructure. As of March 2026, Google (via Google Public Sector) is one of the “Big Four” cloud providers for the U.S. government, alongside Amazon, Microsoft, and Oracle. In July 2025, Google was awarded a contract with a $200 million ceiling by the Chief Digital and Artificial Intelligence Office (CDAO).
Nvidia’s relationship with the U.S. government is fundamentally different from that of Amazon or Google. While those companies provide the software and cloud space, Nvidia provides the physical “engine” (GPUs) that powers every single government AI initiative.
As of early 2026, Nvidia has transitioned from a hardware vendor to a strategic national security partner, deeply embedded in the “War Dept” (DoD) and the Department of Energy (DOE).
In 2018 Google signed a contract for Project Maven, using AI to automatically analyze drone footage for the Air Force. Over 4,000 employees signed a petition and several high-level engineers resigned, arguing that Google should not be in the “business of war.” Google famously backed out of the contract, and CEO Sundar Pichai released a set of “AI Principles” that explicitly promised Google would never develop AI for weapons or for surveillance that violates human rights. In February 2025, Google quietly updated its official AI Principles, removing the explicit ban on using AI for weapons and surveillance. The updated language focused on “supporting national security” and ensuring “democracies lead in AI development.” This effectively cleared the legal and internal path for Google to bid on the Pentagon’s most lethal projects. Industry analysts suggest Google felt it was losing billions in revenue to Microsoft and Amazon, which did not have the same self-imposed restrictions.
The Pentagon has imposed a ban on Anthropic that is to take full effect in six months. Secretary of War Hegseth issued a directive stating that no military contractor may conduct any commercial activity with Anthropic if it wants to keep its own government contracts. Since Anthropic’s Silicon Valley investors all have major defense contracts, they risk losing them, as do other defense contractors including Palantir and the “majors” such as Lockheed and GD.
Anthropic’s lawyers are likely to raise Youngstown Sheet &amp; Tube Co. v. Sawyer (1952). In that case the Court found President Truman’s attempt to nationalize US steel mills during a national security emergency unconstitutional. Unfortunately for Truman, by the time the case was heard by the Supreme Court, the strike that threatened to halt steel production in the midst of the Korean War had been resolved, so the national emergency claim went by the wayside. The Anthropic case, however, is not about nationalization, and the Pentagon is under no legal obligation to accept private company demands that imperil national security.
One question that will linger, no matter what, is whether the private sector can impose its own rules on the government, particularly when national security issues are involved. Anthropic came very close to causing a national security disaster that would have imperiled US operations and the lives of American and allied soldiers.



Let me get this straight... National Security depends on AI to make decisions?
That's a bit disturbing - to put it mildly - as we recently learned that AIs (engaging in war games against each other) kept recommending nuclear strikes in 95% of cases!
Apparently AI is also hard at work finding elusive targets, like mobile missile launchers, during the ongoing operation. Rumor has it AI also chooses which targets to engage - which has led to the destruction of a number of decoys, some of them murals of fighter planes painted on the tarmac.
No wonder Anthropic is reluctant to heed the demands. Nobody wants to be involved in some Colossus vs. Guardian scenario, like the 1966 sci-fi novel.
Which could very well be the result if we let artificial "brains" overrule our own.
this is the most level-headed, clear-eyed article I have read on this topic