AI Chats Face Critical Privilege Risk as Contrasting Federal Rulings Trigger New Law Firm Warnings

NEW YORK, April 15, 2026 — U.S. law firms this week are warning clients that AI chats about active legal matters may become discoverable after a Manhattan federal judge and a Michigan magistrate judge reached different outcomes on privilege and work product in cases involving Anthropic’s Claude and OpenAI’s ChatGPT. The split matters because it suggests protection will turn on facts lawyers can control — whether the system is public or closed, whether counsel directed the work, and whether the claim is attorney-client privilege or work product — rather than on any simple rule about AI.

The sharper warning came from Judge Jed Rakoff’s Feb. 17 opinion in United States v. Heppner, which required a criminal defendant to turn over 31 Claude-generated documents. Rakoff said no attorney-client relationship existed between the user and the chatbot, that Heppner had no “reasonable expectation of confidentiality” in the exchanges, and that the materials were not prepared “at the behest of counsel.”

The counterpoint came in a Feb. 10 order in Warner v. Gilbarco, where Magistrate Judge Anthony Patti refused to force a self-represented plaintiff to hand over AI-assisted drafting material. Patti said ChatGPT and similar systems are “tools, not persons,” and that waiver of work-product protection ordinarily requires disclosure to an adversary or a step likely to put the material in an adversary’s hands.

Why AI chats are now a privilege flashpoint

Taken together, the cases do not create a clean national rule. Heppner involved a represented defendant, a public AI platform and claims to both attorney-client privilege and work product. Warner involved a self-represented litigant, a civil discovery fight and work product rather than classic attorney-client confidentiality. That is why the split is real but narrower than it first appears.

Still, the decisions have pushed firms to change the way they talk to clients about consumer AI. In a Reuters report on the post-Heppner fallout, lawyers said clients are being told not to discuss legal matters with consumer chatbots outside attorney supervision, to favor “closed” enterprise tools, and to document when AI use is being done at counsel’s direction. A Crowell & Moring client alert urged companies to use non-public systems whose prompts and outputs are not used for model training or shared with regulators or third parties.

Courts are not treating all AI chats the same

That nuance is why neither ruling should be overread. Warner is not a blanket holding that all AI chats are protected, and Heppner is not a blanket ban on AI in legal work. What both decisions suggest is that courts will look hard at facts lawyers can control: whether the system is public or closed, whether the vendor’s terms undercut confidentiality, whether counsel directed the work, and whether the material reflects attorney strategy or a client’s self-directed brainstorming.

The concern did not appear out of nowhere this spring. Reuters Practical Law was already warning litigators in a 2023 Q&A on ChatGPT and large language models that discovery, accuracy, and ethics problems were likely to follow courtroom use. In 2024, the ABA said in its first ethics guidance on lawyers’ use of AI tools that lawyers must weigh duties of competence, confidentiality, client communication, and billing when they use generative AI. Reuters returned to the point in a 2025 analysis on protecting privilege and work product, warning that careless GenAI use could expose protected information to waiver, malpractice claims, or sanctions.

The likely near-term result is a more conservative AI playbook: tighter vendor terms, more enterprise deployments, clearer client instructions and much less tolerance for using public chatbots as a scratchpad for facts or legal theories. The rulings do not close the door on AI in legal work, but they do make one point plain: in court, AI chats may be treated less like private notes and more like statements made in a room with the door open.
