23 Mar, 2026 · 5 minutes
Is AI Breaking the Legal Profession — Or Rebuilding It?
Let me start with a story that should make every lawyer uncomfortable.
In June 2025, two separate cases came before the English High Court. In one, a pupil barrister submitted a brief containing five non-existent case citations and a misquoted statute, all hallucinated by an AI. In the other, a solicitor filed an application to set aside a court order. A judicial assistant reviewed the 45 citations it contained. Eighteen of them were completely made up. The solicitor, when confronted, admitted using "publicly available artificial intelligence tools." Dame Victoria Sharp referred both lawyers to their regulators and delivered a pointed warning to the profession: use AI carelessly in these courts and you risk criminal liability; contempt of court carries a sentence of up to two years.
Those two cases were not isolated incidents. By the end of November 2025, UK courts had recorded 24 documented cases of AI hallucinations in legal filings. Globally, the number had passed 600. There is now a publicly maintained tracker of AI hallucination cases in live legal practice. New entries arrive almost daily.
This is the reality of where the legal profession stands with AI right now: extraordinary potential, real disruption, and a growing trail of professional wreckage from people who didn't treat the technology with enough respect.
Here's what I'm seeing from the recruitment side and what it means for anyone building a legal career or a legal team in 2026.
The Courts Are Already Under Pressure
The High Court cases above involved qualified lawyers. But a parallel, stranger trend has been developing in the lower courts: members of the public using AI chatbots to represent themselves. Legal professionals have a name for it, "vibe lawyering", and it's growing fast.
The most widely reported case involved a web developer named Marc Gunnarsson, who walked into a tax tribunal in May 2025 to contest a £13,000 Covid support dispute. He'd prepared his own submissions using an AI chatbot, complete with supporting case law: Patel v HMRC, Ali v HMRC, Kamran v HMRC. None of those cases existed. The AI had invented all of them.
The judge was lenient; Gunnarsson was a layperson doing his best under time pressure. But the warning was clear: courts won't be this patient forever.
The numbers behind the trend are striking. Since ChatGPT launched in 2022, the proportion of civil court cases where both sides have legal representation has fallen from 53% to 41%. Employment tribunal cases have hit a record 64,000+ open files. Barristers and solicitors are spending hearing time explaining basic procedure to AI-briefed litigants, untangling submissions stuffed with US legal concepts that don't apply in English law, or simply absorbing the chaos of claims that should never have been filed.
The problem, as one barrister put it, is that AI doesn't tell you when to walk away. A good solicitor's most important job is sometimes to tell a client their case isn't worth pursuing. Chatbots aren't designed to do that; they're designed to be helpful and to keep the conversation going. That's a genuinely dangerous quality when someone is making a decision that could cost them their home.
What's Happening Inside Law Firms
While the public AI drama plays out in the tribunals, something just as significant is happening inside firms, and it's moving faster than most people realise.
96% of UK law firms now integrate AI into at least one aspect of their operations. The elite firms have been at this for years. A&O Shearman was the first law firm in the world to deploy generative AI across its business — back in 2022. Ashurst rolled out Harvey AI across all of its global offices simultaneously. Linklaters has deployed Legora across all 30 of its offices. Freshfields went a different route, partnering with Google Cloud. Clifford Chance built its own internal tool. Bristol mid-tier firm VWV put 23 trainees into teams that pitched AI proofs-of-concept to senior partners, halved note-taking time, and cut contract-review effort by around 80%.
The money following this is enormous. Legal tech funding reached an estimated $6 billion in 2025 alone. Harvey, the AI platform that has become something close to the industry standard for large firms, has raised over $300 million and is reportedly valued at over $8 billion. Its London office grew from around 10 people to more than 75 in the space of a year.
And it's not just the tools. New career paths are opening that didn't exist three years ago. Legal engineers, sitting at the intersection of law, technology, and client delivery, are now one of the most sought-after roles in the sector, typically requiring a few years of legal practice experience before crossing over. Harvey, Legora, and Luminance are all actively hiring them.
For some lawyers, this represents a genuine alternative to the traditional firm ladder. The pay is competitive, the progression faster, and the work genuinely interesting. The trade-off is that the structured path disappears. As one person inside the sector described it: you go from climbing a ladder to navigating a jungle gym.
The Skills Problem No One Has Solved
Here's the tension no one in the profession has quite cracked yet.
The tasks AI is automating fastest (document review, first-draft contracts, legal research, due diligence) are exactly the tasks junior lawyers have always used to develop their judgment. That's not a coincidence. Those tasks were never just about getting work done. They were how the profession trained its next generation of senior partners.
A 2026 LexisNexis survey found that 72% of legal professionals identify deep legal reasoning and argumentation as the biggest skills gap among junior lawyers today. A further 69% flagged verification and source-checking as a key concern.
The Australian case of Handa & Mallick makes the point with brutal clarity. A solicitor submitted AI-hallucinated authorities in a family law matter, arguing he hadn't appreciated the risk of hallucination. The tribunal was unmoved: ignorance of a tool's limitations is no defence. He was barred from handling trust money and from practising unsupervised for two years. The message is stark: technological literacy is now part of the duty of competence.
The firms navigating this best aren't choosing between AI adoption and training; they're rethinking how training happens in a world where the traditional workload has changed. Some are running structured verification exercises. Some are introducing AI specifically as a "thinking partner" rather than a task-executor, requiring juniors to challenge outputs rather than accept them. A small number of firms have gone further, temporarily restricting junior AI use so associates develop the baseline skills before they're handed the shortcut.
The 2026 Wolters Kluwer Future Ready Lawyer Survey is clear on where this leaves hiring: 75% of corporate legal departments now consider technological expertise to be extremely important when recruiting.
What This Means If You're Hiring or Looking
From where I sit, placing lawyers across private practice and in-house, a few things are becoming clear about the market in 2026.
Specialist expertise is more valuable, not less. AI can draft a contract faster than any associate. It cannot advise a client on a complex regulatory exposure, lead a piece of contentious litigation, or navigate the judgment calls in a sensitive employment matter. Demand for genuine subject-matter specialists in regulatory, IP, complex disputes, and financial services is holding strong.
Tech fluency is now table stakes, not a differentiator. Two years ago, a candidate who could articulate their experience with legal AI tools stood out. Now it's expected. What stands out is the ability to use tools critically, to know when to trust the output, when to check it, and when to push back entirely.
In-house teams are roughly a year behind private practice on AI, and they're closing the gap fast. General Counsels are being asked to advise their businesses on AI governance, employment implications, IP exposure, and data risk, often with little precedent to work from and regulatory frameworks that are still being written.
The SRA is watching, and the grace period is ending. The SRA's December 2025 thematic review of 25 firms found that only one compliance officer out of 36 could outline all of their regulatory responsibilities around AI. That's not a sustainable position. Regulatory pressure is building, and the firms that have treated AI governance as a box-ticking exercise will find that increasingly exposed.
The Bigger Picture
AI is not going to replace lawyers. That's the wrong question. The right question is what kind of lawyer will thrive in a profession where AI is doing an increasing share of the foundational work and where the consequences of getting that relationship wrong are public, documented, and career-defining.
The lawyers who are thriving right now are the ones who have worked out where AI adds genuine value, where it needs aggressive oversight, and where it should simply not be trusted. That's a judgment call. It's also, ironically, exactly the kind of judgment that AI itself can't make.
The cases flooding the courts and the hallucinated citations appearing in High Court submissions have something important in common: they're not really failures of technology. They're failures of professional judgment. The tools did what they were designed to do. The humans didn't apply the scrutiny the situation demanded.
That scrutiny, rigorous, trained, and hard-won, is what the legal market is looking for right now. It always was. AI has just made it more visible.