How the Sausage Is Made (and Why “AI-Native” Is the New Snake Oil)


AI in legal practice is not new. Lawyers have always used the best available tools, from typewriters to word processors, from Lexis to document automation. Artificial intelligence is the latest iteration, not a revolution that displaces the profession. Zack Shapiro’s viral post on building a “Claude-native” law firm attracted over seven million views in March 2026 and triggered a profession-wide debate about whether AI changes everything. I think Shapiro is right that AI changes the game. Where I part company with the hype is on what “changing the game” actually means. In AI legal practice, the differentiator is not the AI. It is the lawyer’s application of it.

What clients are actually paying for

The most important thing a lawyer does is understand what the client needs, which is often not what the client says they need. A client calls and says they want a contract reviewed. What they actually need is someone to tell them whether the deal makes commercial sense, where the real risk sits, and what they can push back on without losing the relationship. That understanding does not come from a language model. It comes from thirty years of advising businesses on regulated markets, from sitting in boardrooms where the stated concern and the real concern are different things, and from knowing the people and institutions involved well enough to read between the lines.

Judgment is the second irreplaceable element. Legal judgment is not the sum of knowledge available to train an AI model. It is the ability to weigh competing considerations, tolerate ambiguity, and make a call that accounts for commercial reality as well as legal principle. It is knowing that a provision which looks standard in one context is a dealbreaker in another. It is knowing when to advise a client to accept imperfect drafting because the commercial relationship matters more than the clause. No amount of training data produces that. It is built through experience, and it carries a premium because clients cannot get it anywhere else.

The regulatory framework for AI legal practice

The Solicitors Regulation Authority has adopted a principles-based approach to AI in the legal market. The SRA’s position, set out in its November 2023 Risk Outlook report and subsequent compliance guidance, is clear: solicitors and firms may use whatever technology they consider appropriate, provided they comply with the SRA Principles and the SRA Code of Conduct for Solicitors (last updated 16 December 2024). The technology is the solicitor’s choice; the professional obligations are not.

Three obligations matter most. Principle 7 of the SRA Principles requires solicitors to act in the best interests of each client. Rule 3.2 of the Code of Conduct requires a competent standard of service. Rule 4.3 requires solicitors to give clients the information they need to make informed decisions. None of these obligations is modified or diluted by the use of AI. As the SRA has stated, a solicitor's professional duties apply to their work regardless of whether AI was used, and regardless of whether the AI was used by the solicitor personally or by someone under their supervision.

The Law Society of England and Wales published practical guidance on generative AI in May 2025, reinforcing the SRA’s position: practitioners must define the purpose and use cases of any AI tool, comply with the SRA Code, and maintain professional responsibility for all outputs. The Data (Use and Access) Act 2025, which received Royal Assent on 19 June 2025, amends the UK GDPR framework for automated decision-making but does not alter the professional obligations that sit on top of it.

What AI does well in AI legal practice, and what it cannot do

AI is good at speed and pattern recognition. I use AI tools daily at Bratby Law for regulatory research, first-draft generation, document review and scanning regulatory publications from Ofcom, the ICO, the FCA and the PSR. An AI tool can read a 200-page consultation document and extract the relevant obligations in minutes. It can produce a first draft of a client memo that captures 80% of the analysis. It can cross-reference a contract against a set of standard positions and flag deviations. These are tasks that previously took hours. AI compresses them into minutes. That compression is valuable, and I pass that value to clients through faster turnaround and lower cost.

At Bratby Law, I use a range of AI tools, each chosen for what it does best. For general research and drafting, I use Claude in Cowork mode (Anthropic) as my primary working environment, alongside enterprise ChatGPT (OpenAI) and Gemini (Google). But general-purpose large language models are only part of the picture. For legal-specific work, I rely on Lexis+ AI (LexisNexis) for case law and statutory research, vLex for cross-jurisdictional analysis, and Law Insider for contract clause benchmarking. The point is that a good lawyer uses the right tool for the job, not just the headline consumer AI products. Legal-specific tools trained on legal data have a place alongside general-purpose LLMs, and knowing which tool to reach for is itself a form of professional judgment.

Within Claude, I have built custom skills that automate recurring workflows in my practice: regulatory monitoring that scans new publications from Ofcom, ICO, FCA and PSR; document review against standard positions; policy drafting from templates; and a post-generator skill that follows Bratby Law’s brand voice, article structure, citation standards and source requirements. That last one is worth dwelling on. A sole practitioner running a specialist regulatory practice does not normally have time to publish weekly commentary. AI makes that possible. I now publish two to three articles a week on bratby.law, each grounded in primary sources and each reflecting a considered position on a regulatory development. Before AI, that output would have required a dedicated research and marketing team. Now it requires me, the right tools, and a workflow built to match how I actually work.
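At its core, the regulatory-monitoring workflow is a keyword filter applied to new publications. A minimal sketch in Python, with an illustrative watchlist and publication list (the names and keywords here are hypothetical placeholders; the real skill runs inside Claude against live regulator output):

```python
# Minimal sketch of a regulatory-monitoring filter.
# Watchlist keywords and publication titles are illustrative only.

WATCHLIST = {
    "Ofcom": ["general conditions", "net neutrality"],
    "ICO": ["automated decision-making", "legitimate interests"],
    "FCA": ["safeguarding", "consumer duty"],
    "PSR": ["app fraud", "interchange"],
}

def flag_publications(publications):
    """Return (regulator, title, matched keyword) for each watchlist hit."""
    hits = []
    for regulator, title in publications:
        for keyword in WATCHLIST.get(regulator, []):
            if keyword in title.lower():
                hits.append((regulator, title, keyword))
    return hits

new_items = [
    ("Ofcom", "Consultation on General Conditions compliance"),
    ("ICO", "Guidance on automated decision-making"),
    ("FCA", "Quarterly consultation paper"),
]
for regulator, title, keyword in flag_publications(new_items):
    print(f"[{regulator}] {title} (matched: {keyword!r})")
```

The filter is trivial by design: the value is not the code but the watchlist, which encodes a lawyer's view of which regulatory topics matter for which clients.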

The output is still mine. I choose the topic. I decide the angle and the line to take. I review and edit every piece before publication. The AI accelerates production; it does not set the editorial direction. If the tool produced something I disagreed with, or that missed the point, or that lacked the regulatory context to be useful, I would not publish it. This is a concrete example of the thesis: AI as ingredient, not recipe. The lawyer's judgment on what to write about and what position to take is the irreplaceable part. The drafting and formatting are the parts AI compresses.

I use the same approach on client work. When I take on a new matter, I configure a project workspace in Claude that captures the client's specific requirements: the terms of the shareholders' agreement, the applicable regulatory framework, competition constraints, and any bespoke conditions that affect the scope of advice. Every document review then runs against those parameters automatically. If a client's SHA reserves certain decisions to the board or requires investor consent for material contracts, the AI flags relevant provisions in every draft it touches. If the regulatory position requires a specific licence condition or notification, the tool checks for it. This is not the AI exercising judgment. It is the AI applying a checklist that I have defined, based on my understanding of the client's position. The result is more consistent and more thorough document review, because the machine does not forget to check a constraint that a tired lawyer might overlook at midnight. But the checklist itself, the decision about what matters and what to look for, is mine. The quality improvement is real. The substitution for judgment is not.
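The per-matter checklist described above can be sketched in a few lines. The constraints below are hypothetical placeholders; in practice each one is configured from the client's actual SHA and regulatory position:

```python
# Minimal sketch of checklist-driven document review.
# The patterns and reasons are illustrative, not a real matter's checklist.
import re

# Each entry pairs a trigger pattern with the constraint it enforces.
MATTER_CHECKLIST = [
    (r"\bmaterial contract\b", "SHA requires investor consent for material contracts"),
    (r"\bborrow(?:ing)?\b", "Borrowing above threshold is a board reserved matter"),
    (r"\bsub-?licen[cs]e\b", "Check licence conditions permit sub-licensing"),
]

def review(draft_text):
    """Flag every clause that touches a checklist item, with the reason."""
    flags = []
    for number, clause in enumerate(draft_text.split("\n\n"), start=1):
        for pattern, reason in MATTER_CHECKLIST:
            if re.search(pattern, clause, re.IGNORECASE):
                flags.append((number, reason))
    return flags

draft = (
    "1. The Company may enter into any material contract in the ordinary course.\n\n"
    "2. The Supplier may sublicense the Software to affiliates."
)
for clause_number, reason in review(draft):
    print(f"Clause {clause_number}: {reason}")
```

The checklist is data, not judgment: the machine applies it exhaustively, but deciding what belongs on it is the lawyer's call.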

None of this is a superpower. The tools to build these workflows are available to every solicitor in England and Wales today. The barrier to adoption is cultural and commercial, not technical. Any competent lawyer who invests the time can do what I do with AI. The question is whether firm structures and incentives allow it.

Shapiro’s experience at Rains LLP illustrates the broader point well: a two-person firm competing with firms of hundreds of lawyers by using AI to move faster and produce more thorough work product. I recognise that picture. I do the same thing. But what makes it work is not the AI. What makes it work is that Shapiro knows his area of law well enough to direct the AI effectively and to catch it when it is wrong. The tool amplifies the lawyer; it does not replace the lawyer.

What AI cannot do is exercise the judgment I described earlier. It cannot pick up the phone to a regulator and explain why a client’s position is reasonable. It cannot look at a set of facts and know, from thirty years of regulatory experience, that this particular point is the one that will matter at enforcement stage. When I review AI-generated research, I am applying judgment built over three decades at Oftel and as General Counsel at regulated telecoms and payments businesses. I know what Ofcom is likely to prioritise because I have sat on the regulatory side of that table. An AI model does not have that context.

The “AI-native” marketing problem

A growing number of law firms and legal services businesses position AI as their core differentiator. The pitch runs: we are “AI-native”, therefore we are faster, cheaper and better. Some are backed by significant private equity investment. The marketing implies that the AI is doing the lawyering.

This is misleading. The differentiator is not the tool. Every competent law firm has access to the same AI platforms. The differentiator is the quality of the lawyer using the tool and the judgment they apply to its outputs. A client instructing Bratby Law on a telecoms regulatory question is paying for my knowledge of how Ofcom enforces General Condition compliance, my experience of Communications Act 2003 disputes, and my ability to give a clear answer on risk. AI helps me get there faster. It is not the answer itself.

The SRA’s Risk Outlook report identifies a specific risk here: that AI-generated outputs may be presented to clients without adequate human review, leading to inaccurate or misleading advice. The risk is amplified when firms market AI capability rather than legal expertise. If the AI is producing the first draft and a junior lawyer is checking it, that is a different service from a senior specialist who uses AI to accelerate their own analysis. Both are legitimate. But they should be described and priced honestly.

The structural problem that “AI-native” does not solve

The real challenge for law firms adopting AI is not the technology. The technology is available to everyone. The challenge is structural. Law firms are staffed on a leverage model built around junior fee-earners doing volume work: research, first drafts, document review. AI now compresses that work dramatically. A task that justified four hours of a trainee’s time can be done in twenty minutes by a senior lawyer with the right tools. That is good for clients. It is a problem for the business model.

Law firms also charge by the hour, which means efficiency is penalised rather than rewarded. A partner who uses AI to halve the time on a piece of work earns less for the firm under an hourly model. AI legal practice, done properly, should reduce the hours billed and increase the value delivered. But that requires rethinking both staffing and pricing, and most firms have not started that conversation. An “AI-native” pitch does not address either of these structural problems. It puts a new label on the same model. The firms that will thrive are the ones that rethink how legal work is staffed, priced and delivered to reflect what AI actually changes about the economics of legal services.

The premium on human judgment in AI legal practice

In my experience advising telecoms operators, fintechs and PE investors, the questions that matter are never the ones AI can answer on its own. They are: is this risk real or theoretical? Will the regulator actually enforce this? What does this contract position mean in practice for the deal? Can we push back on this point without losing the relationship? What is the client actually worried about, beneath the question they have asked? These are judgment calls. They require experience, commercial awareness and, often, the ability to hear what a client is not saying.

I use AI at Bratby Law because it makes me a better lawyer, not because it replaces the need for one. Shapiro is right that AI changes what a small firm can achieve. But the value proposition is the combination: specialist regulatory knowledge built over thirty years, accelerated by tools that compress research and drafting time. AI legal practice done well means the lawyer directs the technology, not the other way round. Clients who want the tool without the judgment will get what they pay for. Those who understand that the sausage is made by the lawyer, with AI as an ingredient rather than the recipe, will get better outcomes.


For advice on AI governance, data protection compliance or the regulatory implications of AI-enabled products, contact Rob Bratby at Bratby Law. Bratby Law advises telecoms operators, fintechs and PE investors on telecoms regulation, data protection and payments regulation.
