Public consultation – Liability for AI Harms under the private law of England and Wales

The Consultation: What's Happening?

The Master of the Rolls, Sir Geoffrey Vos – the second most senior judge in England and Wales – is leading a major initiative to clarify who is liable when AI causes harm.

The question is straightforward: if you use an AI tool, chatbot, or LLM to draft a contract and something goes wrong, who's responsible? Is it you, the AI company, or someone else?

The UK Jurisdiction Taskforce (UKJT) has published a draft Legal Statement that will guide how courts approach these cases for years to come. They invited public feedback, and we were keen to contribute.

Why This Consultation Matters

The UKJT's Legal Statement is designed to guide judges, lawyers, and policymakers as they navigate new territory. It's similar to a medical consensus statement: not law in itself, but a trusted reference point for anyone making decisions about AI and liability.

When a case involving an AI chatbot or LLM comes to court, the judge will almost certainly consult this statement. Even though it won't be an Act of Parliament, it will shape outcomes for years – perhaps decades. That's why it was so important for us to contribute.

Our Position: Users Are the Decision-Makers

At Unwildered, we see AI as a tool to support – not replace – human judgment. Just as using a calculator doesn't excuse you from checking the answer, using an AI drafting assistant doesn't absolve you of responsibility for the document you sign.

Our core argument: When you use Caira to draft a document, you are the author. You review it, you edit it, you sign it. The AI assists; you decide.

This mirrors how tax law already works. Under Schedule 24 of the Finance Act 2007, taxpayers are liable for "careless" mistakes on their tax returns—even if they relied on incorrect advice from an accountant. The Angela Rayner stamp duty controversy in 2025 showed this in action: relying on advice alone wasn't enough. She was found not to have met the highest standards because she failed to verify the advice she received.

The same logic should apply to AI. If you sign a contract without reading it, you can't blame the AI. The AI is your drafting assistant, not your lawyer.

Understanding "Human in the Loop"

You may have heard the phrase "human in the loop" in discussions about AI safety. It means that, even when using AI, a person is always responsible for the final decision.

It's not about letting the AI run wild – it's about empowering users to make informed choices, just as they would with any other tool. When you use Caira:

  • You provide the facts and context

  • You review the AI's output

  • You decide whether to act on it

  • You take responsibility for your decision

This is exactly how it works with solicitors too. Your lawyer drafts a document; you sign it. You're not absolved of responsibility just because a professional helped you.

For consumer AI tools like Caira, the user is the human in the loop. Requiring a lawyer to review every AI output isn't feasible – there are roughly 150,000 practising solicitors in England and Wales, serving a population of around 60 million.

Why This Matters for Consumers

The reality is that 31% of UK adults have unmet legal needs. Many can't afford solicitors, struggle to get appointments, and are left to figure things out alone.

AI tools like Caira change this. For £15 a month, you get legal answers and document drafting 24/7, 365 days a year – including evenings and weekends when your thoughts are loudest and solicitors are unavailable.

If the law imposes the same liability standards on a £15/month AI tool as on a £500/hour solicitor, those affordable options will disappear. The choice for consumers isn't between "AI" and "a solicitor." It's between "AI" and "nothing at all."

We urged the Taskforce to recognise this reality.

The Difference Between AI Tools and Regulated Professionals

Some might ask: shouldn't AI be held to the same standards as solicitors?

Solicitors are regulated, insured, and subject to strict professional standards. They train for years, carry mandatory indemnity insurance, and can be struck off for misconduct.

AI tools, by contrast, are software products – available instantly, at a fraction of the cost, and without the same regulatory framework. They're governed by consumer protection law (like the Consumer Rights Act 2015), not professional conduct rules.

If the law treats both the same, it risks making affordable AI tools unviable, leaving many with no help at all.

This doesn't mean AI should be a free-for-all. At Unwildered, we're committed to responsible innovation:

  • We use clear disclaimers that our output is informational, not legal advice

  • We require user confirmation before important actions

  • We follow best practices for data security

  • We believe AI should empower, not replace, human judgment

But the liability framework must recognise that a £15/month tool serving millions is fundamentally different from a bespoke £500/hour solicitor relationship.

A Message to Lawyers and Stakeholders

For lawyers reading this: this isn't about replacing your expertise. It's about making legal support more widely available.

AI can handle routine drafting and research, freeing up solicitors to focus on complex, high-value work that genuinely requires professional judgment. The 31% with unmet legal needs aren't your current clients – they can't afford to be. AI serves a market that traditional legal services simply cannot reach.

For regulators, it's about striking the right balance: protecting consumers without stifling innovation. The Consumer Rights Act 2015 already provides robust protections for digital content. We don't need to reinvent the wheel – we need to apply existing principles sensibly.

The Risk of Overregulation

If the UK makes it too risky to offer AI legal tools, consumers won't suddenly afford solicitors. They'll turn to ChatGPT – an American product not designed for England & Wales, not trained specifically on UK law, and not subject to UK oversight.

If regulation is too strict, UK consumers may end up relying on overseas AI tools that don't understand our legal system, can't offer local protections, and aren't accountable to UK standards. That's a risk for everyone – consumers, lawyers, and the justice system.

Overregulation doesn't protect consumers. It pushes them toward less safe alternatives.

As Sir Geoffrey Vos himself noted in the consultation paper:

"Any uncertainty, genuine or perceived, risks inhibiting the adoption of beneficial AI tools, particularly amongst risk-averse professional sectors..."

And further:

"English law, as a well-developed flexible common law system, has the ability to provide certainty and predictability in the context of technological innovation. In areas of true novelty, certainty and predictability emerge over time as Courts reason from first principles and develop existing rules to address the novelty in question."

We agree. The law must enable responsible innovation, not stifle it.

Beyond Law: The Bigger Picture

The same challenges exist in healthcare, mental health, and financial advice. There will never be enough doctors, therapists, or financial advisers to meet daily demand. Waiting lists grow. Costs rise. Inequality deepens.

AI is the only realistic way to ensure everyone gets timely, affordable support – raising the standard of living and reducing inequality. What we're advocating for in legal services applies equally to these sectors: contextual liability, user responsibility, and proportionate regulation.

If we get this right in law, it sets a precedent for how the UK regulates AI across all consumer services.

What We Recommended

Our response made four key points:

1. Liability Should Scale with Function

An AI that drafts a document (where you review and sign) is different from an AI that files a court claim without asking you. The first should attract user responsibility; the second should attract vendor responsibility.

2. The Standard of Care Must Be Contextual

A £15/month tool isn't a £500/hour solicitor. The standard of care – as cases like Bolam and Montgomery show – has always been contextual. That shouldn't change for AI.

3. Users Are the "Human in the Loop"

The user reviews, decides, and acts. That's how responsibility should be allocated for informational AI tools.

4. Vendors Are Responsible for Product Quality

If Caira has a data breach, contains malware, or produces output that contradicts its description, that's on us. The Consumer Rights Act 2015 already covers this. But we can't guarantee you'll make wise decisions with what we provide. That's on you.

What Happens Next?

The UKJT will review all submissions and publish a final Legal Statement, likely in mid-2026. This won't be an Act of Parliament – it won't be "the law" in a formal sense – but it will be extremely influential.

When a case involving AI liability comes to court, judges will almost certainly look to the UKJT statement for guidance. Think of it, once again, like a medical consensus statement: not legally binding, but practically decisive.

We're proud to have contributed to this conversation. AI is here, it's helping millions of people, and the law needs to catch up – thoughtfully, fairly, and without undermining the innovation that's making legal help accessible for the first time in history.

We'd Love to Hear From You

Do you think AI should be held to the same standards as solicitors? What worries you most about using AI for legal help? What opportunities excite you?

Get in touch – we'd love to hear your thoughts. Email us at hello@unwildered.co.uk or connect with us on LinkedIn.

What is the UKJT?

The UK Jurisdiction Taskforce (UKJT) is a leading legal body established to provide clarity and confidence in the law as it applies to emerging technologies. Chaired by the Master of the Rolls, Sir Geoffrey Vos, the UKJT brings together legal experts, judges, and industry stakeholders to address legal uncertainties in fast-moving areas such as digital assets, smart contracts, and artificial intelligence.

Since 2019, the UKJT has published influential legal statements on the status of cryptoassets, smart contracts, and digital securities under English law. These statements have helped courts, businesses, and consumers understand how English law applies to new technologies, and have been cited internationally as authoritative guidance.

The UKJT’s work is not limited to digital assets. It has also developed Digital Dispute Resolution Rules to support rapid, expert-led resolution of disputes in the digital economy, and has contributed to legal developments such as the Electronic Trade Documents Act 2023, which allows electronic trade documents to be treated with the same legal effect as their paper equivalents.

Most recently, the UKJT has turned its attention to the question of liability for harms caused by AI. With the rapid adoption of AI in legal, financial, and consumer services, there is growing uncertainty about who is responsible when things go wrong. The UKJT’s Legal Statement on AI liability aims to provide clarity for courts, regulators, and the public, ensuring that English law remains fit for purpose and continues to support responsible innovation.

The UKJT’s work is watched closely by other jurisdictions and often sets the tone for international legal developments. By addressing legal uncertainty, the UKJT helps build market confidence and supports the safe, mainstream adoption of new technologies in the UK and beyond.
