**Anthropic Draws a Line in the Sand Over Military AI Use**

There’s a moment every tech company eventually faces. Growth or guardrails. Revenue or restraint. And this week, Anthropic’s CEO Dario Amodei made it clear where his company stands.

In a public statement, which you can read in full here: https://www.anthropic.com/news/statement-department-of-war, Amodei outlined Anthropic’s ongoing work with the US Department of War and intelligence agencies, and the tension that’s now surfaced between capability and control.

First, some context.

Anthropic has been deeply embedded in US national security infrastructure. According to Amodei, it was the first frontier AI company to deploy models inside classified government networks, the first to deploy at the National Laboratories, and the first to offer custom AI systems for national security missions. Claude is already used for *intelligence analysis, operational planning, cyber operations, modeling and simulation*, and more.

So this isn’t theoretical. It’s operational.

But here’s where things get complicated.

Anthropic has refused to remove two specific safeguards from its AI systems, even after pressure from the Department of War. The Department has reportedly stated it will only work with companies that allow “any lawful use” of their AI and has threatened to label Anthropic a “supply chain risk” if it does not comply. At the same time, officials have suggested Claude is essential to national security.

It’s a strange contradiction. Essential, yet a risk.

Amodei emphasizes something important: **Anthropic does not make military decisions. Governments do.** Still, the company believes there are narrow use cases where AI could undermine democratic values or operate beyond what current systems can safely handle. And those are lines they won’t cross.

There’s also a broader geopolitical layer here. Anthropic says it has already sacrificed hundreds of millions in revenue by cutting off access to firms linked to the Chinese Communist Party and supporting export controls on advanced chips.

If you’ve been watching AI evolve over the past few years, you can feel the shift. We’re no longer just debating what AI *can* do. We’re deciding what it *should* do.

And that conversation is only getting started.