Trump orders sweeping Anthropic ban, allowing a six-month phase-out as Pentagon moves to label Claude maker a “supply-chain risk”

WASHINGTON — President Donald Trump on Friday, Feb. 27, 2026, ordered federal agencies to stop using Anthropic’s AI products, while granting the Pentagon six months to phase out the company’s Claude models embedded in defense systems. The directive follows a breakdown in negotiations over Anthropic’s restrictions on military uses of Claude, and it prompted Defense Secretary Pete Hegseth to move to designate the firm a “supply-chain risk.”

Trump announced the order publicly, and Pentagon officials quickly widened the dispute by tying it to procurement rules that can reach deep into the defense contractor ecosystem. The Pentagon’s carve-out acknowledges that Anthropic software is already integrated into some defense workflows, including systems used at classified levels.

What Trump’s Anthropic ban does

Trump said most agencies must “immediately” cease using Anthropic technology, but the Pentagon can keep using Claude for up to six months while it transitions to alternatives, according to an Associated Press report. In the same message, Trump added: “We don’t need it, we don’t want it.”

The phased approach reflects how widely Anthropic’s models have spread across parts of the federal enterprise, and how difficult it can be to unwind a system once it is built into workflows, tools and contracts.

Why the Pentagon calls Anthropic a “supply-chain risk”

Hegseth said the designation means “no contractor, supplier, or partner” doing business with the U.S. military may conduct commercial activity with Anthropic, a move that could force defense contractors to cut ties even if Claude is only one component of a larger toolchain, CBS News reported. Anthropic argues the secretary lacks statutory authority to bar contractors from all commercial dealings beyond Pentagon contract work.

Anthropic has said it will challenge the designation in court. The dispute centers on two categories of use the company says it will not support: mass domestic surveillance and fully autonomous weapons without human oversight. Anthropic CEO Dario Amodei said the company “cannot in good conscience accede” to Pentagon demands to loosen those restrictions, while Pentagon officials have insisted they want access for “all lawful purposes.” Pentagon spokespeople have said the Defense Department has “no interest” in using AI for domestic surveillance or fully autonomous lethal action, but objected to private guardrails that could limit commanders’ options.

Beyond optics, the label has real procurement consequences — including the potential loss of Anthropic’s roughly $200 million Pentagon contract. Axios explained that “supply-chain risk” designations are typically used to keep adversarial or high-risk technology out of defense procurement, noting the penalty is usually reserved for companies from adversarial countries, such as Chinese tech giant Huawei.

How Anthropic became a government AI supplier

The standoff lands after Anthropic spent the past two years expanding its footprint in Washington. In late 2024, TechCrunch reported that Anthropic teamed with Palantir and Amazon Web Services to bring Claude to defense and intelligence customers, signaling a willingness to sell into sensitive national security environments.

In 2025, Nextgov/FCW reported that Anthropic introduced “Claude Gov” models tailored for national security workloads and designed for adoption inside classified environments. That push continued into civilian agencies: FedScoop reported in December that the Department of Health and Human Services rolled out Claude for Government across the department.

Those moves helped make Anthropic a common name in federal AI procurement — and help explain why the administration’s directive could be disruptive. Cutting off Anthropic products means agencies must not only swap interfaces, but also unwind integrations, workflows and model-based services embedded in day-to-day operations.

What comes next for Anthropic and federal AI procurement

The White House order sets a clock on the Pentagon’s transition — and sets up a likely court fight over how far a “supply-chain risk” designation can reach. Breaking Defense reported that Trump’s six-month phaseout is meant to address how deeply Claude is already “enmeshed” in federal work, even as the Pentagon argues it cannot allow a private company to dictate operational constraints.

The Washington Post noted that Anthropic’s bet on government business has been growing fast — including a Pentagon contract and an offer to supply Claude to civilian agencies for $1 — even as the fight echoes earlier clashes between Silicon Valley and the Pentagon, such as the employee backlash that pushed Google out of the Defense Department’s Project Maven effort years ago.

For now, the six-month phaseout gives defense and civilian agencies time to migrate off Anthropic systems. Longer term, the confrontation is likely to sharpen a question for Congress and federal buyers: whether AI guardrails should be negotiated vendor by vendor — or set through clearer rules that apply across the market.
