The Straight Answer
Yes, your conversations with AI chatbots can be protected by legal privilege in the UK, but only if you use these tools the right way.
Privilege isn't about the technology. It's about why you're using it and who sees it. If you're using ChatGPT, Gemini, Anthropic Claude, or Caira to prepare for legal advice or litigation, and you keep those chats private, UK courts may protect them from disclosure.
The 2026 High Court ruling in Aabar Holdings SARL & Ors v Glencore PLC & Ors [2026] EWHC 877 (Comm) confirmed something important: even drafts and notes you never send to a solicitor can be privileged, as long as their main purpose is seeking legal advice. That principle applies just as much to a conversation with a large language model as it does to a handwritten note on your desk.
What Is Legal Privilege—and Why Should You Care?
Legal privilege is one of the oldest and most powerful protections in UK law. In simple terms, it means that certain private communications cannot be forced into the open—not by a court, not by the police, not by the other side in a lawsuit. It exists so that people can speak honestly when seeking legal advice or preparing for legal proceedings, without fear that their words will be turned against them.
There are two main types:
Legal advice privilege protects confidential communications between you and your lawyer (or material created for the purpose of getting legal advice)
Litigation privilege protects documents and communications created for the dominant purpose of actual or reasonably anticipated legal proceedings
If something is privileged, it's shielded. No one can compel you to hand it over. That's why the question of whether your AI conversations are privileged matters so much—because if they're not, everything you typed could end up in a courtroom.
Why This Matters for You
AI tools are transforming how people handle legal issues. Whether you're:
Drafting a letter to your landlord
Preparing a contract for your business
Summarising complex legal advice from your solicitor
Organising evidence for a workplace dispute
...AI can save you time, money, and stress. Millions of people already use ChatGPT, Gemini, Claude, and Caira for exactly these purposes every day. The benefits are real and significant.
But if your AI chats aren't privileged, they could be disclosed in court proceedings, handed over during litigation, or accessed by someone you didn't intend. The good news? You don't have to choose between using AI and protecting your legal position. You just need to understand two simple rules.
The Two Rules for Privileged AI Conversations
Rule 1: Purpose Matters
Your chat needs to be mainly about getting legal advice or preparing for legal proceedings. Courts call this the "dominant purpose test." It doesn't have to be exclusively about legal advice—it just needs to be the main reason.
Privileged: "Help me draft a response to my solicitor's letter about my employment dispute"
Privileged: "Summarise this contract clause so I can discuss it with my lawyer"
Not privileged: A general chat about something unrelated to legal advice or proceedings
Rule 2: Keep It Confidential
Privilege only protects communications that stay private. The moment you share your AI chats widely—posting them online, emailing them to people outside your legal team, or leaving them accessible to third parties—privilege can be lost.
Privileged: You keep the chat private and only share it with your solicitor
Not privileged: You post your AI chat on social media or forward it to people who aren't part of your legal team
These two rules—purpose and confidentiality—are the foundation. Everything else builds on them.
Real-World Scenarios
Scenario 1: Tenant Preparing a Deposit Claim
Sarah is in a dispute with her landlord over her tenancy deposit. She uses Caira to draft a letter of claim, research her legal position, and prepare questions for her solicitor. She keeps the chat private on her personal device and marks it "prepared for legal advice."
Result: Likely privileged. Sarah's dominant purpose is legal advice, and she's maintained confidentiality. Under the principles confirmed in Aabar Holdings v Glencore, her preparatory drafts are protected even though she hasn't sent them to a solicitor yet.
Scenario 2: Business Owner Sharing Too Widely
James uses ChatGPT to brainstorm contract terms for a supplier deal. He generates some useful clauses, but then shares the entire chat log with his team, his accountant, and two suppliers to get their feedback.
Result: Not privileged. By sharing the chat with people outside his legal team, James has broken confidentiality. Even though the original purpose may have been legal, the wide distribution means a court would likely find privilege has been waived.
Scenario 3: Self-Representing Litigant
Maya is representing herself in a county court claim. She uses Claude to prepare her court bundle, draft witness statements, and organise a timeline of events. She keeps detailed notes marked "for litigation" and doesn't share them with anyone until she files them with the court.
Result: Likely privileged. You don't need a solicitor for privilege to apply. Maya's dominant purpose is litigation, and she's kept everything confidential. The court would recognise her right to a "safe space" to prepare her case.
Scenario 4: Employee in a Workplace Grievance
This is the scenario people don't think about until it's too late. Tom is in a grievance dispute with his employer. During his lunch break, he uses Gemini on his work laptop—logged into his work account—to research his legal options, draft a grievance letter, and ask about constructive dismissal.
His employer's IT policy gives them the right to monitor and access all activity on company devices. Three weeks later, Tom's employer produces his AI chat logs as part of the grievance proceedings.
Result: Privilege is seriously at risk. Even though Tom's purpose was legal advice, he used a device and account his employer controls and can access. Confidentiality—the second rule—is compromised. If your employer can see your chats, they're not truly private.
The lesson: If you're in a dispute with your employer, use your personal device, your personal account, and a platform your employer has no access to. This is one of the most common mistakes people make, and it can cost you the protection that privilege would otherwise provide.
How the Law Got Here: The Key Rulings
The rules around AI and legal privilege didn't appear overnight. They're built on decades of case law that courts are now applying to the digital age.
Three Rivers (No 5) [2003]
This Court of Appeal decision originally narrowed legal advice privilege. It held that, in a corporate context, only communications between the lawyer and those specifically authorised to seek legal advice on behalf of the company were privileged. Internal documents passing between employees, even if they were preparing material for the lawyer, were not protected.
For years, this created a gap. Companies worried that their internal preparation for legal advice was exposed. Individuals wondered whether their own notes and drafts were safe.
SFO v ENRC [2018]
The Court of Appeal pushed back against the narrow Three Rivers approach, particularly regarding litigation privilege. It held that investigatory work carried out by or for lawyers—interviews, document reviews, briefings—could attract privilege if the dominant purpose was preparing for litigation. This signalled a more practical, broader interpretation.
Jet2.com and Sadeq v Dechert LLP
These cases further clarified that legal advice privilege covers not just the final letter to your solicitor, but the process of seeking advice. Notes, drafts, summaries, and internal discussions can all be privileged if they're part of that process and kept confidential.
Aabar Holdings SARL & Ors v Glencore PLC & Ors [2026] EWHC 877 (Comm)
This is the ruling that changed the landscape. The High Court confirmed that intra-client documents—internal drafts, memos, and notes prepared within the client's own team—can be privileged even if they were never actually sent to a lawyer.
The court held that there is no logical distinction between a document intended to be sent to a lawyer and one that simply contains information meant for the lawyer. Both attract privilege if the dominant purpose is legal advice.
What does this mean for AI? If you use ChatGPT, Gemini, Claude, or Caira to prepare material for legal advice—drafting arguments, clarifying facts, organising your thoughts—that material can be privileged under the Aabar Holdings principle. The AI chat is an intra-client document: it's your own preparatory work, created for the purpose of seeking legal advice.
Munir v Secretary of State for the Home Department [2026]
The UK Upper Tribunal sounded a warning in this case. It stated that pasting confidential legal material into a public AI tool could amount to placing it "in the public domain." This doesn't mean you can't use AI—it means you should be thoughtful about which platform you use for sensitive legal material, and whether the platform genuinely keeps your data confidential.
Does the AI Platform Matter?
Not as much as you'd think, but it's worth understanding the difference.
Whether you use ChatGPT, Gemini, Claude, or Caira, the same two rules apply: purpose and confidentiality. No platform automatically makes your chats privileged, and no platform automatically destroys privilege.
That said, privacy-focused platforms give you an extra layer of protection. Caira, for example, is built with a privacy-first approach specifically for legal work. Unwildered (the team behind Caira) doesn't use your data for model training, doesn't have third-party moderators reviewing your conversations, and doesn't share your information with anyone. Your legal conversations stay completely private. You can read the full details in Caira's Terms & Conditions and Privacy Policy at unwildered.co.uk.
Public versions of other AI tools sometimes have terms of service that allow data review or training use. For most everyday legal tasks, this isn't a practical concern—but for highly sensitive matters (major commercial disputes, criminal defence, regulatory investigations), using a privacy-focused platform like Caira removes any doubt about confidentiality.
The important point: don't let platform choice stop you from using AI. Choose the tool that works for your situation. If the stakes are high, lean towards privacy-focused platforms like Caira.
What If the Police Seize My Phone?
This is a question that comes up more than you'd expect. If law enforcement confiscates your device and finds your AI chat logs, can you claim privilege?
Yes, potentially. But you'll need to demonstrate:
Purpose: The chats were created for legal advice or litigation
Confidentiality: You kept them private and didn't share them widely
Evidence of intent: Ideally, the chats themselves contain indicators—"I'm preparing this for my solicitor," "this is for my court case"—that show their purpose
The Aabar Holdings ruling supports the view that preparatory documents, including digital ones, are privileged if the dominant purpose test is met. But the burden is on you to show the court why privilege applies.
Practical tip: If you're using AI for legal preparation, add a line at the start of your chat like "This conversation is for the purpose of seeking legal advice" or "Preparing for litigation." It takes two seconds and can make all the difference if your privilege is ever challenged.
Can You Make AI Chats Privileged After the Fact?
No. This is a common misconception. You cannot brainstorm freely with an AI chatbot and then "make it privileged" by forwarding the chat to your solicitor.
In the 2026 US case United States v. Heppner, a defendant used Anthropic Claude to draft his criminal defence strategy, generating 31 documents. He later shared them with his lawyer. The court ruled the chats were not privileged because Claude isn't a lawyer, and the defendant used it voluntarily—the AI was treated as a third party.
UK courts would likely follow the same logic. The lesson is straightforward: if you want privilege, start with that intention. Create your AI chats for the purpose of legal advice from the beginning, and keep them confidential throughout.
Practical Tips: Protecting Your AI Conversations
Mark your chats: Add "Prepared for legal advice" or "For litigation" at the start of any AI conversation about legal matters
Keep them private: Don't share with anyone except your legal team
Use personal devices for personal disputes: If you're in a dispute with your employer, never use work devices or accounts
Be clear about purpose: Use AI specifically for legal advice or court preparation
Use privacy-focused tools for sensitive matters: Caira is built specifically for legal work with strong confidentiality guarantees—no third-party review, no training on your data, complete privacy
Review with a solicitor: AI is an excellent starting point for preparation, research, and drafting—but for important matters, get professional advice
FAQ
Q: Are my AI chats always privileged?
No. Privilege depends on your purpose (legal advice or litigation) and whether you kept the chats confidential.
Q: Do I need a solicitor for privilege to apply?
No. Litigants in person have the same privilege rights. If you're self-representing and preparing for court, your AI drafts can be privileged.
Q: Can I use AI to prepare for court?
Absolutely. AI tools are excellent for drafting, organising evidence, and understanding legal concepts. Keep your work private and purposeful.
Q: What if I used my work computer to chat with AI about a dispute with my employer?
This is risky. If your employer can access your chat logs through IT monitoring or account access, confidentiality is compromised and privilege may be lost. Use a personal device and account instead.
Q: Is it safe to use public AI platforms for legal matters?
For most purposes, yes. The benefits of AI are substantial. For highly sensitive matters, consider privacy-focused platforms like Caira for added confidence.
Q: Can the other side in a lawsuit request my AI chat logs?
Yes. AI chats count as "documents" under the UK Civil Procedure Rules, so they can be requested during disclosure—unless they're privileged.
The Bottom Line
AI tools like ChatGPT, Gemini, Anthropic Claude, and Caira are powerful allies for legal work. They help you draft, research, organise, and understand complex issues—making the law more accessible and less intimidating for everyone, whether you're a consumer, a business owner, or a self-representing litigant.
Legal privilege can protect your AI conversations. The law is clear, and the 2026 Aabar Holdings v Glencore ruling has broadened that protection to cover exactly the kind of preparatory work people do with AI every day.
Two rules. That's all it takes:
Use AI mainly for legal advice or litigation preparation
Keep your chats confidential
Don't let uncertainty about privilege stop you from using AI. Use it smartly, use it purposefully, and when the stakes are high, mark your work and keep it private.
The law is catching up with technology. You can too.
Disclaimer: This article is general information, not legal advice.
