WASHINGTON — Anthropic’s lawsuit against the Trump administration reached a critical new stage Tuesday, March 18, 2026, when the Justice Department urged a federal judge in San Francisco to keep the Pentagon’s blacklisting of the AI company in place while the case moves forward.
The fight now goes well beyond one contractor dispute. It asks whether the government can use national-security procurement powers to shut out an AI supplier for refusing two uses the company says are unsafe: autonomous weapons operating without human oversight and mass surveillance of Americans.
In the Justice Department’s opposition brief, government lawyers said Anthropic’s refusal to accept the Pentagon’s standard “all lawful use” term was “conduct, not protected speech,” and they argued the company posed a supply-chain risk because its privileged access to the model could let it interfere with sensitive systems. Anthropic’s March 9 complaint calls the designation unprecedented, poorly explained and unconstitutional; a related challenge in Washington targets a separate authority that could widen the exclusion across government.
Why the Anthropic lawsuit matters beyond one contract
The case could determine how much leverage Washington can exert over major AI vendors once their software is embedded in national-security work. Anthropic is not refusing military work altogether; the company says it was willing to support intelligence analysis, cyber operations, operational planning and other sensitive uses, but not the two categories it considers beyond the limits of current AI reliability or civil-liberties protection.
That position collides with the administration’s view that a vendor cannot supply warfighting systems while also reserving a veto over how the customer may use them. For the White House and Pentagon, the dispute is about control over lawful military operations. For Anthropic, it is about product safety, free speech and whether the government can retaliate against a supplier for stating where its model should stop.
What makes the standoff harder to frame as a clean break is the Pentagon’s own internal posture. A March 6 memo allowing rare exemptions beyond the six-month ramp-down suggested officials recognized that Claude is already threaded through parts of the defense stack and may be difficult to remove quickly from mission-critical systems.
The private sector has noticed. Microsoft, which integrates Anthropic’s models into technology it sells to the U.S. military, backed the company in an amicus brief warning of costly disruptions for suppliers. That support has helped turn the case into a proxy fight over whether AI companies can do serious government work while keeping any meaningful say over the most controversial ways their models are used.
How the Anthropic lawsuit grew out of a courtship
The rupture is striking because Anthropic spent much of the past year building a national-security business. In November 2024, it teamed up with Palantir and AWS to sell Claude to defense customers. By June 2025, it had launched Claude Gov for defense and intelligence agencies, a version designed to handle classified material and other government-specific tasks.
That history makes the current standoff look less like a blanket refusal to work with the state and more like a narrower — but much more consequential — argument over where an AI supplier may still draw hard lines. If Anthropic wins early relief at the March 24 hearing, the court will have signaled skepticism toward using blacklist-style authorities to punish a vendor’s disfavored safety policies. If the administration wins, accepting “all lawful use” terms could become the effective price of doing major AI business with the federal government.
Judge Rita F. Lin’s decision will not settle every question in the case. But it should offer the clearest early signal yet on whether the courts see the Pentagon’s move as a national-security judgment entitled to broad deference or as a retaliatory overreach dressed in procurement language.

