AI: Pentagon bans Anthropic from contracts and Amodei denounces punitive decision
Anthropic CEO Dario Amodei has just stepped up to the plate in the face of an unprecedented decision by the Trump administration. Washington has designated his company a “risk to the US defense supply chain,” opening the way to an unprecedented legal battle between a major American tech firm and its own government.


In brief

  • Pete Hegseth bans any US military contractor from working with Anthropic.
  • Donald Trump orders all federal agencies to immediately stop using Claude AI.
  • Anthropic rejects two specific uses: fully autonomous weapons and mass surveillance of citizens.
  • A few hours later, OpenAI signed a contract with the Department of Defense.

A sudden breakup after eighteen months of collaboration

It all started in 2024 with a $200 million contract signed between Anthropic and the Department of Defense. At the time, the company touted itself as the first advanced AI company to operate on classified US military networks. A point of pride, and a strong commercial argument.


Eighteen months later, this same contract is at the heart of an open legal confrontation with Washington.

The sticking point is simple: the Pentagon wanted unrestricted access to Claude, Anthropic's flagship model.

The company set two non-negotiable conditions: no integration into fully autonomous weapon systems, and no mass domestic surveillance of American citizens. Months of private negotiations changed nothing. The ultimatum, set for 5:01 p.m. Friday, passed without an agreement. Anthropic did not budge.

The administration's response was immediate and brutal. Pete Hegseth designated Anthropic a “national security supply chain risk,” a label usually reserved for foreign companies under hostile influence, such as Huawei or ZTE.

Several legal scholars have already described the decision as a dangerous precedent. Trump then drove the point home on Truth Social, in capital letters: all federal agencies must “IMMEDIATELY STOP any use of Anthropic's technology.”

Anthropic sticks to its positions, OpenAI takes advantage

In front of the CBS News cameras, Dario Amodei conceded nothing. Describing the Pentagon's decision as “unprecedented” and “punitive”, the Anthropic CEO firmly reaffirmed the substance of his position:

These are fundamental things for Americans: the right not to be spied on by the government, the right for our military officers to make their own decisions about the war, and not to entrust them entirely to a machine.

He clarified that he is not opposed to autonomous weapons in principle, but that current AI models are simply not reliable enough to operate without human supervision in a lethal context. He also called on Congress to legislate quickly to regulate the use of AI in national surveillance programs.

Anthropic announced it will challenge the designation in court, relying on Section 3252 of Title 10 of the United States Code, which limits the scope of such a designation to Department of Defense contracts only.

Meanwhile, OpenAI seized the opportunity. Just a few hours after Hegseth's announcement, Sam Altman confirmed the signing of an agreement to deploy OpenAI models on American military networks. The online reaction was strong, with many seeing it as an endorsement of mass surveillance and the weaponization of AI.

Anthropic chose its principles over its contracts. A risky bet, but consistent with the mission it has proclaimed since its founding: developing safe AI. The coming legal battle will show whether that posture can hold up against the American administrative machine.
