The Heppner Wake-Up Call: Why Your Client’s Chatbot Conversations are the New Discovery Nightmare

April 18, 2026
6 min read
On February 17, 2026, the Southern District of New York (SDNY) issued a ruling that serves as a definitive ethical and malpractice advisory for the modern era. In United States v. Heppner, Judge Jed S. Rakoff addressed a question of first impression: Are a client’s secret logs with a generative AI platform protected by privilege?

Consider the ethical implications when a client, acting without your knowledge, creates a discoverable roadmap for the prosecution. In this case, Federal Bureau of Investigation (FBI) agents executed a search warrant at Bradley Heppner’s home, seizing various electronic devices. It was only during the subsequent privilege review that Heppner’s counsel identified 31 logs of extensive conversations between the defendant and “Claude,” an AI platform built by Anthropic. Heppner had used the tool to draft defense strategy outlines and legal questions after receiving a grand jury subpoena, but because he did so without the direction of counsel, the court ruled the logs were fair game for the government.

For solo and small firm practitioners, Heppner is, for now, the controlling guidance, and it establishes that if you aren’t proactively managing your client’s digital habits, you are leaving their defense vulnerable.

1. Your AI is Not a Lawyer (The Privilege Failure)

To determine if the AI logs were protected, the court applied the “three-element” test from United States v. Mejia. Under this standard, a communication is privileged only if it is: (1) between a client and their attorney, (2) intended to be and kept confidential, and (3) for the purpose of obtaining or providing legal advice.

The defense failed at the first step. Claude is not an attorney, and as the court noted, the discussion of legal issues between two non-attorneys is never protected.

Attorneys often fall into the dangerous trap of believing that once a client hands over a document, or places it in a folder labeled “Work Product,” it magically acquires protection. Judge Rakoff was clear: non-privileged communications are not “alchemically” changed into privileged ones just because they are later shared with counsel. Because these documents were “born” public in the client’s hands, they remained public. You cannot retroactively shield data that was created outside the protected attorney-client circle.

2. The Privacy Policy is a Privilege Waiver

The second element of the Mejia test, confidentiality, was defeated by the very terms of service your clients likely click “Accept” on without reading. Anthropic’s privacy policy explicitly states that it collects data to “train” its models and reserves the right to disclose that data to third parties, including governmental regulatory authorities.

Judge Rakoff emphasized that the lack of a human professional relationship is fatal to a claim of privilege:

“[R]ecognized privileges” require, among other things, “a trusting human relationship,” such as, in the attorney-client context, a relationship “with a licensed professional who owes fiduciary duties and is subject to discipline.” No such relationship exists, or could exist, between an AI user and a platform such as Claude.

We must advise our clients that clicking “Accept” on a standard consumer AI Terms of Service (ToS) is legally equivalent to inviting a Department of Justice agent to sit in on a client meeting. By using “Open” or “Consumer” AI tools, the client is voluntarily disclosing their thoughts to a third party that disclaims any duty of confidentiality.

3. The “Nuclear Option”: Waiver of Underlying Communications

The most devastating aspect of the Heppner ruling is found in the court’s footnote 3. Judge Rakoff suggested that by feeding privileged information into Claude, the defendant likely waived privilege over the underlying communications themselves.

This is the “nuclear option” of discovery. It isn’t just the AI log that becomes discoverable; the original, sensitive conversation between you and your client regarding that same topic may now be stripped of its protection because the client shared the “substance” of that advice with a third-party machine.

4. The “Volition” Gap in Work Product Protection

The defense’s attempt to use the Work Product Doctrine failed because of the “volition” gap. The court noted the documents were not prepared “at the behest of counsel” but were created by Heppner on his own initiative.

Judge Rakoff explicitly addressed and rejected the reasoning in Shih v. Petal Card, Inc., a case where a magistrate judge allowed a client to withhold communications prepared in anticipation of litigation without attorney direction. Rakoff argued that Shih “undermines the policy animating the work product doctrine,” which is intended to protect the lawyer’s mental processes.

Small firm clients often try to “help” by doing their own research. In the AI era, this creates a “double-whammy”: the client loses the privilege and simultaneously creates a discoverable roadmap of their own anxieties, legal theories, and factual admissions. If the attorney didn’t pull the trigger on the search, they cannot protect the results.

5. Tech Competence as a Mandatory Duty (ABA Rules 1.1 & 1.4)

For the solo practitioner, the duty of competence under ABA Model Rule 1.1 (Comment 8) effectively transforms the individual attorney into the “Chief Information Security Officer” (CISO) for their clients. If you are not auditing your client’s technology use, you are failing your duty to communicate risks under Rule 1.4.

Practical Steps for Small Firms:

  • Update Engagement Letters: Include a clause explicitly prohibiting the use of generative AI for matter-related research or organization without written counsel consent.
  • Intake Disclosures: Require clients to list any AI tools (ChatGPT, Claude, Gemini) they used to research their matter before hiring you.
  • The Kovel Safeguard: If you must use AI, ensure you are the one directing the session. This allows you to argue the Kovel doctrine, treating the AI as a non-lawyer agent (like an accountant or translator) necessary for the rendering of legal advice.

6. The Shifting Data Privacy Landscape

The Heppner ruling relies on a business-centric, contract-heavy US view of privacy: if you signed the ToS, you waived the right. This is increasingly at odds with the EU’s “human rights” model, which views data privacy as inherent to the individual, regardless of a “click-wrap” agreement.

As AI governance evolves, we may see a shift toward the human rights model, but for now, US practitioners are bound by the contract-based rule. As long as these platforms treat user data as a commodity for training, consumer-grade AI will remain a legal landmine that can detonate a client’s entire defense.

Conclusion: The Future of “Silicon Agents”

While commentators argue the Heppner ruling is too categorical and fails to treat AI like other cloud tools (Anthropic Claude or Cowork, Microsoft Copilot, OpenAI ChatGPT), it remains the current standard. We are moving toward a reality where the Kovel doctrine may eventually be forced to recognize “silicon agents,” but we are not there yet.

In a world where a reported 800 million people use AI weekly, you can no longer assume your client’s “private” notes are actually private. Ask yourself: Are you prepared for a world where the most dangerous witness against your client is the chatbot in their pocket? Begin your next intake by assuming they have already talked to one.

Try CareMyCase AI Ethics Engine.

This article is for informational purposes only and does not constitute legal advice.