Most professional services firms are stuck in the same place. They know AI is transforming their industry. Partners have dabbled with ChatGPT. Maybe there was a training day. But nothing systematic has happened, and the gap between "we should do something" and "we have done something" keeps growing. This guide is for the person who needs to make a decision about what to do next.

What follows is not a product pitch. It is a framework for thinking about AI strategy in professional services - one that works regardless of which tools you choose, which vertical you operate in, or how large your firm is. The goal is to help you move from uncertainty to a clear, practical plan.

Why Most AI Strategies Fail Before They Start

The most common failure mode is not choosing the wrong tool. It is never getting past the evaluation stage. Firms form committees. They attend conferences. They request demos from six vendors. They run a pilot with three volunteers who were already enthusiastic. Six months later, nothing has changed for the other 95% of the firm, and the pilot group has quietly stopped using the tool because nobody configured it properly for their actual work.

The second failure mode is buying tools before understanding workflows. A managing partner sees a demo, gets excited, and purchases 50 seats of something. The team gets a group email with login credentials and a link to a help centre. Adoption peaks in week two and declines steadily after that. The tool gets quietly written off as "not quite right for us" when the real problem was that nobody did the implementation work.

Committee-driven approaches are particularly lethal. The instinct to form a working group, gather requirements, evaluate options, and build consensus is sensible in most business contexts. In AI, it produces 18-month timelines in a market that is moving in 90-day cycles. By the time the committee has agreed on a shortlist, the technology has moved on and the early adopters in your market have built six months of institutional knowledge that you cannot shortcut.

The real question that most firms avoid is simple and uncomfortable: do you have someone who can actually do the implementation work? Not evaluate it. Not manage a vendor relationship. Someone who understands both your business workflows and the technology well enough to configure tools around how your firm actually operates. If the answer is no, that is the first problem to solve. Everything else follows from it.

The Three Questions That Actually Matter

Before you evaluate a single tool, before you attend another webinar, before you form another working group, answer these three questions honestly. They will tell you more about your AI readiness than any assessment framework.

Where is production work consuming senior time? This is the leverage question. In every professional services firm, there are tasks that require expertise to do well but do not require expertise for every minute of their execution. A partner drafting a client advice letter brings judgment to the analysis but spends significant time on the production: formatting, referencing, structuring the document. A senior recruiter screening candidates brings relationship knowledge to the shortlist but spends hours on the initial review. Find the tasks where senior people are doing production work, and you have found where AI creates the most immediate value.

What would change if first drafts took 15 minutes instead of half a day? This is the capacity question. Most firms are not short of work. They are short of capacity. The constraint is not demand but the number of hours available to produce the work. If the production component of your most common deliverables could be compressed by 70-80%, what would your team do with the recovered time? Take on more matters? Improve quality on existing work? Go home at a reasonable hour? The answer tells you what kind of return you are actually looking for.

Who will own this internally after the external help leaves? This is the sustainability question, and it is the one that most firms skip. Any good AI implementation creates momentum. New use cases emerge. People find novel applications. The tool needs updating as workflows evolve. If you do not have someone inside the firm who understands how the system works and can maintain and extend it, the value will degrade within six months. Identifying your internal champion before you start is not optional. It is a prerequisite.

A Framework, Not a Product Pitch

The approach that works, regardless of sector or firm size, follows three phases. This is not original thinking. It is what every successful implementation I have seen has in common, distilled into a pattern that can be replicated.

Phase 1: Understand

Map the firm's real workflows - not the idealised version from the operations manual, but how work actually gets done day to day. Identify the highest-leverage tasks: the ones where production time is disproportionate to the judgment required. Audit existing tools to understand what the firm already has and where the integration points are. This phase typically takes one week and involves conversations with every function in the firm.

Phase 2: Implement

Configure AI around the firm's actual work. Build reusable skills and templates for the five to ten most common tasks. Train by function, not by department - a lawyer reviewing contracts needs a different setup than a lawyer drafting advice letters, even if they sit in the same team. Test with real work, not hypothetical examples. Iterate based on what people actually find useful, not what looked good in the demo. This phase typically takes one to two weeks.

Phase 3: Sustain

Identify and train an internal champion who can maintain the system after the initial implementation. Establish monthly reviews to measure adoption - not just deployment. Expand use cases as confidence grows. The most valuable implementations are the ones that keep compounding: each month, the team finds new applications, builds new skills, and pushes the boundary of what is possible. This phase is ongoing and is the difference between a project and a capability.

The framework is deliberately simple. The complexity lives in the execution, not the strategy. A beautifully designed AI strategy that never gets implemented is worth less than a rough plan that ships in week one and improves every week after.

Tool Agnosticism vs. Tool Knowledge

The market is moving fast. Claude currently leads enterprise AI: industry surveys have repeatedly placed Anthropic at the front of enterprise LLM adoption, and its share has grown sharply over the past year. Its focus on accuracy, safety, and enterprise governance has made it a default choice for professional services. But locking your entire strategy to one vendor is a risk that any experienced business leader should think carefully about.

The right approach is deep expertise in the best current tools combined with a framework that works regardless of which platform leads next year. Today, that means building primarily on Claude for knowledge work, reasoning, and document production. It means using Make.com for workflow automation, Supabase or Airtable for structured data, and platforms like Vercel and Next.js when you need custom applications. The right tool depends on the task, and the best implementations use several tools together rather than trying to force one platform to do everything.

This is an important distinction. A consultant who only knows one tool will recommend that tool for every problem. A practitioner who understands the landscape will recommend the right tool for each specific workflow, and will build systems that can adapt when the market shifts. The firms that build vendor-agnostic capability will be better positioned than the ones that bet everything on a single product.


What Good Implementation Looks Like

There is a persistent confusion in the market between "adoption" and "implementation." Adoption is buying seats and hoping people use them. Implementation is configuring the tool around how your firm actually works, so that using it is easier than not using it. The difference in outcomes is enormous.

The best implementations share common characteristics. Someone has spent time inside the firm understanding the workflows before touching any technology. Each person gets a setup configured for their specific role and working style. The training happens on real work, not hypothetical exercises. There is follow-up coaching in weeks two and three, when the initial enthusiasm fades and the real habit formation begins. And there is a clear measure of success that goes beyond "how many people logged in this week."

Bad Implementation

Purchase enterprise seats. Send a group email with login instructions and a link to the help centre. Run a one-hour training webinar. Declare the firm "AI-enabled." Measure success by counting logins. Wonder why adoption drops off after month two. Conclude that "AI isn't quite there yet for our sector."

Good Implementation

Spend a week mapping workflows and identifying high-leverage tasks. Configure the tool for each function: litigation gets different skills than corporate, which gets different skills than client development. Build reusable templates loaded with the firm's own precedents and house style. Train each person individually on their specific setup using their real current work. Follow up at week two and week four to troubleshoot, refine, and expand. Measure adoption by tracking whether people's workflows have actually changed - not whether they logged in.

The gap between these two approaches is not subtle. It is the difference between a tool that gathers dust and a capability that compounds. And the cost difference is surprisingly small relative to the outcome difference. Most of the value comes from the implementation quality, not the technology spend.

The Cost of Waiting

The firms that will struggle most are not the ones that choose the wrong tool. They are the ones that spend 2026 evaluating while their competitors spend 2026 implementing.

The gap between early adopters and late movers compounds in ways that are not immediately obvious. A firm that implemented AI six months ago has not just saved six months of time. It has built six months of institutional knowledge. Its team knows which prompts work, which workflows benefit most, where the tool falls short, and how to work around its limitations. That accumulated understanding cannot be purchased or shortcut. It can only be built through practice.

The early adopters are also attracting talent. Junior professionals increasingly want to work at firms that use modern tools. They see AI fluency as a career asset, and they gravitate toward environments where they can develop it. The firms that resist adoption will find recruitment harder, not just because they are less efficient, but because they are less attractive to the people they want to hire.

But moving fast and moving recklessly are different things. The firms that rush into poorly planned implementations waste money and, worse, create internal resistance that makes future adoption harder. "We tried AI and it didn't work" is a narrative that is very difficult to reverse once it takes hold. The goal is decisive, informed action - not blind speed. Understand the workflows, implement properly, sustain the change. That sequence matters more than the timeline.

The window for building a genuine competitive advantage through AI implementation is open now. It will not stay open indefinitely. As the tools become more commoditised and implementation expertise becomes more widely available, the advantage will shift from "we use AI" to "we have been using AI well for longer." The compounding starts when you start. The only thing that is certain is that starting later means compounding less.