He was a financial services CEO, and he had a problem. Federal charges were coming. His lawyers were working the case, but he had his own questions, the kind you don’t necessarily want to pay $800 an hour to ask. So he opened the free version of Claude and started typing. He analyzed his own legal exposure. He worked through the strategy. Some of what he typed came directly from conversations with his attorneys. It felt, as most AI conversations do, entirely private.
It wasn’t.
In February 2026, a federal judge in New York ordered those documents produced to prosecutors. The CEO, Heppner, had assumed the work was protected. The court disagreed. And the reason it disagreed is the thing every business needs to understand right now.
Seven days before the Heppner ruling, a federal magistrate judge in Michigan reached the opposite conclusion.
The case was Warner v. Gilbarco, Inc., and the facts were almost comically different. The plaintiff wasn’t a sophisticated executive with a legal team. She was a pro se litigant, meaning she had no attorney at all. She had used AI tools to help her think through her own case. The defendants moved to compel production of those AI interactions.
The court denied the motion. Her AI-generated materials reflected her own internal analysis and mental impressions, prepared in anticipation of litigation. Compelling that production, the judge wrote, “would nullify work-product protection in nearly every modern drafting environment.” The work product doctrine held.
Same technology. Two cases decided in the same week. Two completely different outcomes.
In Heppner, Judge Rakoff’s reasoning was methodical. Attorney-client privilege failed for three reasons: Claude is not an attorney, there was no attorney-client relationship between the CEO and Anthropic, and the materials were not prepared at the direction of counsel. He went to Claude on his own, without being asked or directed by his lawyers. That decision cost him.
Work product protection failed for a related reason. That doctrine protects materials prepared “by or at the behest of counsel in anticipation of litigation.” Heppner’s AI sessions satisfied neither requirement. He wasn’t counsel, and no counsel had directed him.
Here is the part that matters most: the consumer version of Claude, the free platform Heppner used, carries terms of service that permit Anthropic to collect user inputs, use them for model training, and disclose data to third parties, including to government authorities. Those terms helped defeat his privilege claim, because privilege depends on confidentiality, and he had agreed in writing to terms that gave confidentiality away. The moment he clicked “accept” on the user agreement, he handed prosecutors part of their argument.
In Warner, the work product doctrine survived because the plaintiff’s AI use looked more like note-taking than consulting a third party. Because she represented herself, she occupied the role counsel would otherwise fill, and her own case preparation fell squarely within the doctrine. Her thought process, her mental impressions, her litigation analysis: all of it stayed protected. The court applied the same doctrine it would have applied to a handwritten legal pad. The tool was different. The principle was not.
Consumer-grade AI, meaning the free or low-cost tiers of ChatGPT, Gemini, and Claude, typically operates under terms that allow the provider to retain your inputs, use them for model training, and share data with third parties. Those terms are not hidden. Most people just don’t read them.
Enterprise-grade AI operates under negotiated commercial agreements: no training on customer data, restricted provider access, enforceable confidentiality obligations, audit logs, and custom data-retention controls. After Heppner and Warner, that contractual difference carries real legal weight.
The risk runs well beyond litigation. If your employees are entering deal terms, financial projections, personnel information, or regulatory materials into a consumer AI platform, the exposure may already exist before anyone ever files a lawsuit. Consumer terms can compromise trade secrets, implicate NDA obligations, and surface regulated personal data in ways that are hard to walk back.
For attorneys, the stakes are higher still. Entering confidential client information into a consumer AI tool may violate ABA Model Rule 1.6(c), which requires lawyers to make reasonable efforts to prevent the inadvertent or unauthorized disclosure of information relating to the representation of a client. Heppner made that concern concrete.
We worked through this ourselves before we began using AI tools in our practice. Here is what we are now telling clients.
Know what is going in. Find out what your employees are actually entering into AI tools. Deal terms, client data, financial projections, personnel files, litigation strategy, and regulated personal information each carry different legal obligations. The exposure is proportional to the sensitivity of the information.
Read the platform terms, or bring them to us. The fine print varies significantly across platforms, and the variations matter. Look for provisions covering data retention, training on inputs, human review of prompts, and third-party disclosure. Enterprise agreements should prohibit training on your data, restrict provider access, and impose real confidentiality obligations. Watch for carve-outs covering “anonymized” data or “service improvement” that can hollow out those commitments in practice.
Use enterprise AI for anything sensitive. Consumer plans lack the contractual protections needed to support a privilege argument or satisfy confidentiality obligations. If your business uses AI to work through legal, regulatory, financial, or personnel matters, the consumer tier is not the right tool.
Add technical controls. Work with your IT team to restrict consumer AI platforms on company networks and managed devices. Data-loss-prevention tools can intercept sensitive information before it leaves your environment. No single control is complete, but layered defenses reduce exposure meaningfully.
Train your people. This is not only a legal department problem. Every employee who handles confidential information should understand the line between consumer and enterprise AI, and should know that their AI conversations can end up in discovery. Build that into your existing security training.
Ask about prior AI use when a matter arises. At the outset of any legal, regulatory, or compliance engagement, ask whether anyone used AI tools to analyze or discuss the issue. Prior consumer AI use may have compromised privilege before the matter ever reached us. We ask this question routinely now, and we recommend you build it into your own intake protocols.
Update your document preservation policies. AI interactions are electronically stored information subject to preservation and discovery obligations. Chat logs, uploaded documents, exported summaries, and AI-generated analyses should all be covered by your litigation hold notices.
At Lyons Gaddis, we use AI tools, and we have been deliberate about which tools, under what terms, and for what purposes. Before we adopted any AI in our practice, we reviewed the platform terms the same way we are now advising clients to do. We are transparent with clients about how we use it, and we are confident our practices align with our obligations under the rules of professional conduct.
The lesson from Heppner and Warner is not that AI is dangerous. It is that the tool does exactly what its terms of service say it will do. Heppner did not read the fine print, and the moment he accepted a user agreement, he handed federal prosecutors part of their argument.
Your AI tool doesn’t keep secrets unless you’ve specifically contracted for it to do so.
If you want to talk through how this applies to your situation, reach out. We are glad to help you figure out where you stand before someone else figures it out for you.