Technology & Tools
AI in advice — what works, what doesn't, in a UK IFA firm
AI tools genuinely help UK IFA firms in five specific places: meeting notes capture, fact-find data extraction, first-pass suitability drafting, fund research summaries and client comms drafting. Each comes with a specific failure mode the firm has to control — hallucinated funds, model-invented FCA references, advice-language drift, mis-calibrated vulnerability handling. The FCA expects the firm to remain accountable for every output. The right way to use AI in an IFA workflow is as adviser-in-the-loop assistance against the firm's real data, not autonomous generation. This pillar article covers each use case with what it does well, what it gets wrong, and the controls that make it safe.
I had coffee in February with an IFA principal — a Bristol firm, eight RIs, two paraplanners — who'd just spent six weeks trialling four different AI tools across his practice. His verdict was, roughly, "two are brilliant at one specific thing, one is dangerously confident, and one is too cautious to be useful." He's not wrong. The 2026 AI-for-advisers market is full of demos that look like the future and outputs that, on inspection, contain a fund that doesn't exist or a PROD justification that misreads the FCA's own paragraph.
This piece is the WA position on where AI helps an IFA firm, where it hurts, and how to draw the line. It's structured around the five places we see AI actually doing work in UK practices, the failure mode each one has, and the control the firm needs around it. We're honest about WA's own capability versus roadmap at the end.
1. Meeting notes capture
Where it helps: the recorded-meeting-to-structured-notes workflow. The adviser sees the client for 45 minutes. The tool records (with consent), transcribes, and produces a summary with action items, client objectives in the client's own words, vulnerability indicators flagged for review, and follow-ups assigned. Otter and Fireflies do the transcript layer; specialist adviser tools like Saturn add the IFA-shaped extraction. A paraplanner who used to spend 30 minutes typing up notes from the adviser's scribbles spends five minutes reviewing the structured output.
Where it hurts: misheard names and amounts, over-confident extraction of "client said X" when the client said almost-X, and the regulatory question of what's in the recording. Consent is straightforward (you ask). Data residency isn't — some popular tools store recordings on US servers, which trips a UK GDPR transfer assessment most firms haven't run.
Control: pick a tool with UK or EU data residency; capture consent in the firm's standard client agreement; have the paraplanner review the transcript against the adviser's own recollection before anything is filed.
2. Fact-find data extraction
Where it helps: the prospect emails a stack of PDFs — old pension statements, an ISA wrapper from a different platform, the client's spreadsheet showing planned expenditure. AI parses the PDFs and populates the structured fact-find fields. Names, providers, valuations, fund holdings, contribution histories. What used to take Sarah-the-paraplanner an hour of squinting at small print takes four minutes of review.
Where it hurts: OCR errors on poor-quality scans, hallucinated values where the model fills a blank with something that "looks right", and the failure mode of confidently extracting a fund's ISIN from a statement that doesn't quote one. The model invented it.
Control: every extracted value carries a provenance link to the source page it came from, with a "verify" checkbox. The paraplanner doesn't sign off the field until it's been visually checked. Bulk extraction is fine; bulk auto-trust isn't.
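The provenance-plus-verify control above can be sketched as a simple data shape. This is an illustrative sketch, not WA's actual schema — the field names and the `unverified` helper are assumptions for the example:

```python
from dataclasses import dataclass

@dataclass
class ExtractedField:
    """One value pulled from a client document, carrying the
    provenance the paraplanner needs before signing it off.
    Field names here are illustrative, not a real WA schema."""
    name: str               # e.g. "pension_valuation"
    value: str              # the raw extracted value
    source_doc: str         # filename of the uploaded PDF
    source_page: int        # page the value was read from
    verified: bool = False  # stays False until a human has looked

def unverified(fields: list[ExtractedField]) -> list[ExtractedField]:
    """Anything not visually checked blocks sign-off of the fact-find."""
    return [f for f in fields if not f.verified]
```

The point of the shape is that bulk extraction and bulk trust are separated: every value arrives with a page reference, and sign-off is gated on the list from `unverified` being empty.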
3. First-pass suitability drafting
Where it helps: the structural drafting and the rationale paragraph. The fact-find inputs and the portfolio analysis feed the model; the model produces a draft against the firm's house template. The paraplanner edits the case-specific commentary and the firm's tone. There's a full discussion of this use case in our AI suitability article.
Where it hurts: invented fund names, mis-cited KIID dates, fabricated FCA paragraph references, paragraphs that drift into advice-language without the right qualifiers, mis-calibrated tone on vulnerability-flagged cases.
Control: facts come from the platform's records, never the model. The model is constrained to recommendation-language with built-in qualifiers. Vulnerability-flagged cases are human-first, model-second. Every draft has named-paraplanner-and-named-adviser sign-off in the audit trail.
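"Facts come from the platform's records, never the model" can be enforced mechanically: interpolate facts into the house template before the model sees anything, and fail loudly if a fact is missing rather than leave a gap the model might fill. A minimal sketch, with a hypothetical template and record shape:

```python
import string

# Hypothetical fragment of a house template; real templates are longer.
HOUSE_TEMPLATE = string.Template(
    "Client: $client_name. Objective: $objective. "
    "Current holdings valued at $valuation as of $valuation_date."
)

def build_fact_block(platform_record: dict) -> str:
    """Fill the template from the platform record only.
    substitute() raises KeyError on any missing fact, so a gap
    becomes a hard error, never a blank for the model to invent."""
    return HOUSE_TEMPLATE.substitute(platform_record)
```

The model then drafts *around* this fact block; it is never asked to supply a valuation, a date, or a fund name itself.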
4. Fund research summaries
Where it helps: the qualitative side of fund research. The model reads the latest fund manager commentary, the trustee report, the IM update, and produces a one-paragraph summary of how the strategy has moved versus what the firm holds. For a paraplanner reviewing 40 funds across the firm's CIP quarterly, this collapses a half-day to two hours.
Where it hurts: out-of-date information (the model trained on last June's commentary doesn't know about February's strategy change), confused fund codes between share classes, and the model not knowing which version of "Schroder Strategic Income" the firm actually holds.
Control: feed the model the firm's actual current holdings file (with ISIN), and the actual document the firm wants summarised (with date). Never let the model retrieve documents from the internet — too many strategy changes and too many similarly-named funds. The summary is supplementary; the quantitative analysis comes from live Morningstar in WA's portfolio analysis stage.
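Keying everything on ISIN rather than fund name is what makes the similarly-named-funds problem tractable. A minimal sketch of the matching step, with made-up ISINs and a hypothetical record shape:

```python
def summaries_for_holdings(holdings: list[dict], summaries: list[dict]) -> list[dict]:
    """Only surface a summary when its ISIN matches a fund the firm
    actually holds. Similarly-named funds in a different share class
    carry a different ISIN and drop out here. Record shapes and ISINs
    are illustrative."""
    held_isins = {h["isin"] for h in holdings}
    return [s for s in summaries if s["isin"] in held_isins]
```

Name-based matching would happily attach commentary for the wrong share class of "Schroder Strategic Income"; the ISIN set makes that a silent non-match instead.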
5. Client communications drafting
Where it helps: the everyday client comms that paraplanners and advisers write by the hundred each quarter. The annual review email, the rebalance notice, the explanation of why an MPS change is happening, the response to a Budget question. The model drafts in the firm's voice; the named sender reviews and sends.
Where it hurts: the slip into advice language ("we recommend you increase your contributions"), the over-soft tone on a vulnerability-flagged client, the use of jargon the firm's house style avoids, and the regulated-financial-promotion question for any client comms that wanders into product mention.
Control: the firm's house style is the model's prompt. Vulnerability-flagged clients get a different prompt that prioritises plain language and longer explanation. Anything mentioning a product or a return runs the firm's standard financial-promotion review.
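The advice-language slip is cheap to catch with a lint pass before anything reaches the named sender. A sketch, assuming a firm-maintained phrase list — the patterns below are illustrative, not a complete or authoritative set:

```python
import re

# Hypothetical starter list; a real firm maintains and reviews its own.
ADVICE_PATTERNS = [
    r"\bwe recommend\b",
    r"\byou should (?:buy|sell|switch|increase|reduce)\b",
    r"\bguaranteed\b",
]

def advice_language_hits(draft: str) -> list[str]:
    """Return the phrases that should route this draft to adviser
    review (or the firm's financial-promotion process) before send."""
    hits: list[str] = []
    for pattern in ADVICE_PATTERNS:
        hits += re.findall(pattern, draft, flags=re.IGNORECASE)
    return hits
```

A non-empty result doesn't block the comm; it routes it. The human still decides whether "we recommend" belongs in an advised context or needs rewording.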
What the FCA expects, in one paragraph
The FCA's October 2024 AI Update and its Dear CEO correspondence to firms using AI made the position clear, and it has held since. The firm is accountable for outputs regardless of how they were drafted. The use of AI must be documented in the firm's controls. The senior manager responsible for the relevant function (under SM&CR) cannot delegate that accountability to a vendor or to a model. There are no special "AI rules" — there's the existing handbook applied to a firm that happens to use AI. The smart firms are over-disclosing AI use, not hiding it: the FCA wants to see it being managed, not concealed.
The Consumer Duty crosscut
Consumer Duty asks whether the firm consistently produces good outcomes. AI can either help that (faster suitability drafts mean more time for the actual conversation; structured note capture means clients' vulnerability flags don't get lost) or hurt it (hallucinated facts in a suitability report are exactly the bad-outcome pattern the Duty wants firms to evidence they prevent). The framing the FCA has taken in industry roundtables is the framing we'd take: AI is neutral. The controls are what determine whether it produces good outcomes or bad ones.
What WA does and doesn't do today
WA today: meeting notes integration (via partner tools, UK data residency), fact-find data extraction from PDFs with provenance links, suitability drafting against the firm's live record (no model-invented facts), and structured client comms templates drawing on the live client record. We don't host the model — we structure the workflow so AI use is auditable, controlled, and tied to the named human in every case.
WA doesn't today: generate fund recommendations (CIP is the firm's), execute trades or move money (we're not authorised), or write client comms that auto-send without human review. We won't ship anything that crosses the regulated-advice line.
Every Wealth Analytica article is fact-checked against primary sources where applicable. Read our editorial policy for our sourcing and review standards.
Ready to reclaim your Tuesday evenings?
Join the IFAs already growing AUM 35% YoY whilst working fewer hours.