Your Firm Is Using AI. It Is Probably Not Working.
The tool is not the problem. Here is what is.
Across professional services - law, accounting, financial advisory, consulting - firms now have broadly similar access to AI tools. The technology gap that early adopters enjoyed in 2023 has closed. Everyone has a license. Most have run a pilot. Some have a policy.
And yet the productivity gains are, in most firms, underwhelming. Associates use AI to draft and then spend as long editing as they would have spent writing. Partners try it once, get a generic output, and conclude the technology is overhyped. Automation pilots quietly stall after the first quarter.
The firms drawing real value from AI - measurably shorter matter cycles, higher realization rates, fewer write-downs on routine work - are not using better tools. They have built five specific capabilities that most firms do not have. The gap is not technological. It is organizational, and it is widening.
Deploying a tool and building a capability are two entirely different things. The firms pulling ahead understood this early. Most firms have not yet confronted it.
These five capabilities are not difficult to build - but they will not emerge on their own from individual experimentation. Here is what they are, why most firms are missing them, and what it costs when they do.
SKILL 01. Knowing How to Give an Instruction
The single most common reason AI disappoints inside professional services firms is the simplest one: the instructions are too vague to produce useful output.
Nobody was taught to brief an AI system. The instinct is to treat it like a search engine. But AI works like a highly capable colleague who knows nothing about your client, your matter, or your firm's standards. Brief it poorly, and it produces something generic. Brief it well, and the output is often startlingly good.
A strong instruction has four components:
Role - tell it what expert it is working as. Not "help me review this" but "act as a senior M&A partner advising a UK acquirer on a software acquisition."
Context - the specific facts of the situation. Deal structure, regulatory environment, client's commercial priorities, and known risks.
Command - a precise instruction. "Identify the top three risks to my client, ranked by severity" not "have a look at this clause."
Format - how you need the output. Risk matrix, client-ready paragraph, options table with trade-offs, two-page memo.
The difference between a vague prompt and a structured one is not a small difference. It is the difference between output that needs to be completely rewritten and output that needs light editing.
THE COST OF MISSING THIS
Every fee earner who has not been taught this is leaving the majority of the tool's value on the table.
Time spent editing poor AI output is not a technology problem. It is a briefing problem, and it compounds across every session, every day.
SKILL 02. Stopping the Cold Start
Every time a fee earner opens a new AI session without providing context, the AI knows nothing. Not your firm's communication standards. Not the client's commercial priorities. Not the partner's risk threshold. Not that your firm never uses passive voice in client letters, or that this client's board needs numbers in plain English.
The result is output that is technically competent but institutionally generic. It has to be.
The fix is a master prompt: a one-to-two page document each professional builds once and uploads at the start of every session. It covers:
Their role and practice area
The types of clients and matters they typically work on
The firm's communication and quality standards
Standing format preferences - how work should be presented
Any context the AI should always hold in mind
The impact on output quality is immediate. One accounting firm that piloted this approach found that first-draft AI outputs went from requiring substantial reworking to being largely usable with light review - not because the AI got smarter, but because it finally had the context it needed.
Building a master prompt takes about 20 minutes. The fastest method: tell the AI to act as an interviewer and ask you everything it needs. Answer using voice-to-text. Let AI compress your answers into a clean document. Save it as a PDF. Upload it at the start of every session.
SKILL 03. Treating the First Draft as the Starting Point
The professionals who get the most from AI are not the ones who write the best first prompt. They are the ones who know what to do after the first output arrives.
Most people prompt once, get something imperfect, and either accept it or abandon it. The gap between those two outcomes - and between adequate AI output and genuinely excellent output - is almost always the quality of the iteration in between.
Effective iteration follows a simple discipline:
Read the output as a senior reviewer, not a user. What is missing? What is overstated? What would make a client, a judge, or a counterparty push back?
Give directional feedback, not vague feedback. Not "make it sharper" but "the opening buries the key risk. Lead with that. Cut the background entirely. End with a clear recommendation, not a list of considerations."
Repeat until the output meets the firm's standard. Three rounds of precise iteration consistently produce work indistinguishable from your team's best and get there faster.
The gap between mediocre AI output and excellent output is almost never the model. It is the quality of the conversation between draft one and draft two.
SKILL 04. Using AI as a Challenger
By default, AI is agreeable. Ask it to review your advice and it will find things to praise. Ask it to assess your strategy and it will tell you it looks sound. For senior professionals who need their thinking stress-tested, this is almost completely useless.
The skill is prompting AI as a challenger - not a collaborator. Before finalizing advice or a recommendation, use it to find the holes.
Where this matters most in professional services:
Ask the AI to take opposing counsel's position and identify the three weakest points in your argument.
Ask it what assumptions in the deal structure are most likely to be wrong, and what the model cannot see.
Ask it to act as the most skeptical regulator reviewing the filing and list every gap they would flag.
Ask it to argue the strongest case against the course of action you are about to recommend.
This is not about replacing professional judgment. It is about pressure-testing it before it reaches the client - at a moment when the cost of a flaw is still zero. The partners who use AI this way consistently find it surfaces risks they had underweighted and exposes assumptions they had not examined.
When the AI surfaces a challenge you agree with, update your master prompt so the same blind spot does not appear in future work.
SKILL 05. Managing AI as Firm Infrastructure
Here is what typically happens six months after a firm deploys AI. Individual fee earners have each developed their own prompts. Some are excellent. Most are inconsistent. Nobody is sharing what works. The quality of AI-assisted output varies sharply between team members doing identical types of work, and partners have no visibility into the variance, because AI usage is invisible in a way that associate work is not.
The knowledge built through experimentation is trapped in individual laptops and chat histories. When someone leaves, it leaves with them. When a new associate joins, they start from scratch. The firm's collective AI capability does not compound. It resets.
The firms that avoid this problem treat AI infrastructure the same way they treat any other knowledge management system:
A shared prompt library for recurring tasks - deal reviews, client risk summaries, regulatory analyses, board reports - organized by practice area and kept current.
Team-level master prompts for shared workstreams, so multiple fee earners working on the same matter are working from the same foundation.
A quarterly review process: prompts that produce the best outputs replace those that do not.
One person with the authority to set standards and the discipline to maintain them - typically a practice group leader or an operationally-minded partner.
None of this is a technology project. It does not require a new platform or IT involvement. It requires one decision: that AI capability is a firm asset, not an individual habit.
The question is not whether your people are using AI. It is whether your firm owns the capability - or whether it is scattered across a hundred individual workflows that disappear when someone closes their laptop.
Why These Five, and Why in This Order
These capabilities are not independent. They build on each other in a specific sequence:
Structured prompting gives you better raw outputs.
The master prompt makes those outputs consistent and firm-specific.
Output iteration closes the gap between good and excellent.
AI as a challenger improves the underlying thinking, not just the presentation.
Shared AI infrastructure makes all of it scalable, so the value is not trapped in individual hands but available to the whole firm.
Most firms are somewhere in the middle of this sequence: partial progress on skills one and two; struggling with three and four, because nobody has named them as capabilities that need to be built; and not yet confronting five, because it requires a decision most managing partners have not yet been asked to make explicitly.
The window to build this advantage is open. The technology is accessible, the investment is modest, and the organizational changes required are well within reach of any firm that decides to make them. What is required is not a technology strategy. It is clarity of intent and the organizational will to build capability systematically rather than wait for it to emerge.
Where is your firm in this sequence?
Most firms cannot accurately answer that question from the inside. The gaps are invisible precisely because AI usage is individual and informal - nobody has a clear view of where capability exists and where it does not.
ValueLab's AI-Ready Diagnostic was built to answer that question in 10 minutes. It maps where your firm stands across each of these five dimensions, identifying both where the highest-value opportunities lie and where the hidden costs of the current approach are accumulating.
It is not a sales tool. It is a diagnostic. Some firms find they are further along than they thought. Most find two or three specific gaps they had not clearly named. Either way, you leave knowing something concrete.
Marta Hyland is the founder of ValueLab, an AI advisory consultancy that helps professional services firms implement AI where it creates measurable business value. She works exclusively with law firms, accounting practices, and financial advisory businesses that are serious about getting this right.
The ValueLab AI-Ready Diagnostic is where most firms start. It identifies your highest-value AI opportunities and tells you exactly where to focus first.