
The Anthropic-Pentagon Showdown is the Biggest AI Policy Story of the Year


In Brief

The Trump administration threatened to blacklist Anthropic from all government contracts after the company refused the Pentagon's demand for unrestricted military use of its AI model, Claude.

The Pentagon then reached a deal with OpenAI hours later. At issue is who gets to determine how AI is used in government.

What Happened

In late February, a dispute between Anthropic and the Pentagon escalated into the most consequential AI policy confrontation to date. And it moved fast.

It started when Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei and delivered an ultimatum: grant the Pentagon unrestricted military use of Claude, or lose all government contracts. Hegseth has publicly described Anthropic's safety policies as "woke AI" that threaten national security.

Anthropic refused to capitulate. Amodei published a public statement defending the company's position, followed by a direct response to Hegseth's comments. The company also updated its Responsible Scaling Policy, not to loosen restrictions, but to clarify its framework for government partnerships.

Then the Trump administration blacklisted Anthropic, designating the company a "supply chain risk" and ordering federal agencies to stop using its technology. Within hours OpenAI announced its own deal with the Pentagon.

The backlash was immediate. Hundreds of employees at Google and OpenAI signed an open letter supporting Anthropic's stance through the Not Divided initiative. The Senate Defense Committee weighed in. And Anthropic said it would take the Pentagon to court.

The Key Numbers

$30 billion — Anthropic's February 2026 fundraising round, co-led by Peter Thiel's Founders Fund

~12 hours — Time between Anthropic's blacklisting and OpenAI's announced Pentagon deal

6 months — Transition period the government gave agencies to stop using Anthropic, despite calling it an immediate security risk

2 years — Duration Anthropic was the only AI company trusted to operate in classified government settings

This Isn't Simply a Good vs. Evil Story

The contradictions are stacking up on every side. Start with Hegseth's public statement: in it, he says he is directing the Department of War to designate Anthropic a supply-chain risk to national security, yet in the same breath says Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to "a better and more patriotic service."

"I could do an hour on that paragraph alone," says SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 200 of The Artificial Intelligence Show. "Each sentence on its own contradicts itself in five different ways."

"They're so much of a supply chain risk that they're mandating everyone else stop working with them, but we're still going to use them in the bombing of Iran over the weekend," he continues. "And we're going to keep them around for six months."

The quick deal with OpenAI raises red flags. "It's very surprising to me that OpenAI and the government could arrive at terms on a contract so quickly," Roetzer says. "This is the government. Nothing in the government contractually gets done in 12 hours."

But the democracy argument cuts both ways. Palmer Luckey, founder of defense tech company Anduril, raised a counterpoint that deserves serious consideration: one company and one CEO effectively telling elected officials what they can and can't do with AI sets its own dangerous precedent. Roetzer acknowledges the tension. "You can say that Anthropic is making a moral, ethical stand here," he says. "The precedent it sets is that one company and in essence one CEO gets to tell the elected officials of our democracy what they can and can't do, what is and is not legal. And that I think actually gets to the heart of what is actually going on here. There's this slippery slope."

The real sticking point may be darker than the headlines suggest. The Pentagon didn't just want Claude for battlefield applications. The government wanted to use it to analyze data collected from Americans: search histories, GPS movements, chatbot queries, credit card transactions. That's not a debate about AI capabilities. It's a debate about domestic surveillance.

The big money connection is impossible to ignore. Anthropic raised $30 billion in February, co-led by Peter Thiel's Founders Fund. Thiel is widely credited with putting JD Vance in office. Palantir, Thiel's defense tech company, operates in the same government AI space. The connections between the investors funding Anthropic and the political figures threatening it are direct.

"Follow the money," Roetzer says. "I don't see any way that Thiel isn't talking to Vance and Vance isn't talking to Trump and Hegseth, and somehow you make it go away."


The government is now the world's most aggressive AI regulator. Former Trump AI advisor Dean Ball put it bluntly: the United States federal government is now, by an extremely wide margin, the most aggressive regulator of AI in the world, not through legislation but through executive action against individual companies.

SmarterX Take

The dispute highlights a question the AI industry has been avoiding: Who gets to decide what AI can and can't do? That question is now playing out in real time between the government and an influential AI company.

Both sides have legitimate claims. Anthropic has every right to set safety boundaries on its own technology. The government has legitimate national security interests. The problem is that neither side has a framework for resolving the tension, and the administration's approach of blacklisting and publicly pressuring Anthropic is not a substitute for one.

The impact could outlast the policy fight. "If you care about safety, they already were the place to be," Roetzer says of Anthropic. "They're probably getting flooded with resumes from top researchers." On the enterprise side, he sees a similar effect:

"If you're a massive enterprise in a highly regulated industry and you now know the only company for two years that the government trusted with classified settings was Anthropic, who are you going to trust?"

What to Watch

  • The court challenge. Anthropic said it would take the Pentagon to court. If it follows through, this becomes the first major legal test of whether the government can force an AI company to remove safety restrictions on its technology.
  • The Thiel-Vance back channel. Peter Thiel co-led Anthropic’s $30 billion round and is widely credited with putting JD Vance in office. If a deal materializes quietly, the money trail will tell you more than the press releases.
  • Whether OpenAI’s deal survives scrutiny on its timeline and terms. A government contract finalized in roughly 12 hours raises serious questions. If the terms look pre-negotiated, the blacklisting looks less like policy and more like leverage.
  • Ultimately, watch for a deal. “What this administration does is they take extreme positions as negotiating ploys,” says Roetzer. “They take the most extreme position, do the most extreme thing, say the most extreme thing, all to get you to meet somewhere in the middle or closer to their end game.”

Resources

A Timeline of the Anthropic-Pentagon Dispute → techpolicy.press

Anthropic, Pentagon, and the Claude standoff → axios.com

Trump Administration Trying to Make an Example of Anthropic → americanprogress.org

Not Divided: Open Letter Supporting Anthropic → notdivided.org

Anthropic Responsible Scaling Policy v3 → anthropic.com

Anthropic to Take Pentagon to Court → axios.com

Heard on The Artificial Intelligence Show, Episode 200
Paul Roetzer and Mike Kaput break down the Anthropic-Pentagon showdown: the timeline, the contradictions, the money trails, and why this is likely the most important AI policy story of the year. Listen →
