Anthropic Faces Sweeping Federal Cutoff as Treasury, State and Housing Agencies Drop Claude; StateChat Moves to OpenAI

WASHINGTON, March 3, 2026 — The Treasury Department, the State Department and the Federal Housing Finance Agency began phasing out Anthropic’s Claude AI assistant after President Donald Trump ordered federal agencies to cut ties with the company.

The shifts, including the State Department’s plan to run its internal chatbot StateChat on OpenAI’s GPT-4.1, widen a crackdown that began with a Pentagon dispute over whether Claude could be used without limits for domestic surveillance and fully autonomous weapons.

Anthropic cutoff spreads beyond the Pentagon

Agency notices and internal messages reviewed by Reuters showed the departments of Treasury, State and Health and Human Services moving Monday to stop using Anthropic products, joining the Defense Department’s earlier shift away from Claude.

Treasury Secretary Scott Bessent said in a post on X that the department was “terminating all use of Anthropic products, including Claude,” while HHS urged employees to use alternatives such as ChatGPT and Google’s Gemini, according to the Reuters report. The Federal Housing Finance Agency’s director, William Pulte, said the regulator and mortgage giants Fannie Mae and Freddie Mac were also ending their use of Anthropic products.

The widening pullback follows Trump’s directive that agencies phase out Anthropic technology after a standoff with the Pentagon over AI safety safeguards, as described by The Associated Press. The order directs most agencies to stop using Anthropic’s tools immediately, while the Pentagon was given a six-month window to unwind systems already embedded in military platforms, the AP reported.

Defense Secretary Pete Hegseth also labeled Anthropic a “supply chain risk,” a designation typically used to isolate foreign adversaries from defense work. In coverage from The Verge, Hegseth said the label would bar military contractors from doing business with Anthropic, a stance the company has disputed as overbroad and unsupported by statute.

Anthropic CEO Dario Amodei has argued that the company’s red lines are narrow and rooted in constitutional and safety concerns. In a Feb. 26 statement, Amodei said Anthropic is willing to support national security work but will not remove contractual safeguards against “mass domestic surveillance” and “fully autonomous weapons,” and he said the company would challenge any supply chain risk designation in court.

StateChat moves to OpenAI

The State Department’s switch is one of the most visible changes because it involves a widely used internal tool. A memo cited by Reuters said: “For now, StateChat will use GPT4.1 from OpenAI,” and the department said more information would follow.

A State Department spokesperson, Tommy Pigott, told Reuters the department was taking “immediate steps” to comply with the president’s directive and bring its programs “into full compliance.” Officials have not publicly detailed how long it will take to migrate projects that were built around Claude’s interfaces and safety tooling, or what data retention and auditing rules will apply across replacement systems.

HHS’ internal note urging staff to shift to other platforms underscored a key near-term challenge for agencies: finding approved alternatives that satisfy procurement, privacy and security requirements while keeping productivity pilots alive. In practice, many offices have relied on a mix of vendor chat tools and internally hosted models, often with different controls for record-keeping, sensitive information and user logging.

A fast reversal after the government’s earlier Claude embrace

The speed of the cutoff marks a sharp break from the federal government’s recent trajectory. In 2024, State Department leaders said employees were asking for an internal chatbot to help streamline tasks such as translation and summarization, according to FedScoop’s April 2024 report.

By mid-2025, State was expanding the tool’s footprint into sensitive administrative workflows. A State Department cable described in a June 2025 Reuters report said StateChat would be used to help assemble candidate lists for Foreign Service Selection Boards, while emphasizing that the evaluations themselves would not be performed by AI.

Anthropic, meanwhile, had been pushing deeper into the national security market well before the current dispute. In late 2024, the company teamed up with Palantir and Amazon Web Services to make Claude models available inside defense-accredited environments, as outlined by TechCrunch.

And in August 2025, the General Services Administration touted a governmentwide “OneGov” agreement aimed at making Claude for Enterprise and Claude for Government available across all three branches for a nominal fee, according to a GSA news release.

Now, agencies that adopted Claude through pilots, enterprise licenses or procurement vehicles are reassessing those deployments under the new directive. For federal contractors, the uncertainty is heightened by Hegseth’s supply chain risk language, which could force companies to choose between DoD work and continued use of Anthropic tools — a conflict Anthropic says the government cannot legally impose outside defense contracts.

In the weeks ahead, the practical question for agencies will be less about model preference and more about governance: which systems will be authorized, how outputs will be audited and stored, and whether employees will be allowed to use multiple competing chat tools depending on mission needs. Anthropic has signaled it will fight the designation, setting up a legal and policy clash that could shape how Washington treats “frontier” AI vendors far beyond this one contract dispute.
