Every training provider I speak to is stretched. The funding rules change. The compliance burden grows. The Ofsted framework evolves. The employer expectations rise. And through all of it, the same team is expected to do more: develop curriculum, support learners, manage employer relationships, evidence impact, prepare for inspections, and somehow find time to innovate. The margin between operating well and falling behind gets thinner every year.
Most providers know AI is happening. A few staff have tried ChatGPT to draft lesson plans or write marketing content. Maybe someone experimented with generating learner feedback. But nothing has changed how the organisation actually operates. The IQA reports still take the same number of hours. The funding claims still get assembled the same way. The employer progress reports still get written from scratch each time.
Meanwhile, something is shifting. A small number of organisations have stopped treating AI as a curiosity and started treating it as core operational infrastructure. Not a lesson planning shortcut. Not a chatbot. A fundamental part of how the provider produces work, manages compliance, and delivers quality. The results are significant enough that the rest of the sector needs to pay attention.
The Training Provider Productivity Problem
Training providers have a structural tension that most people outside the sector do not fully appreciate. The work that creates the most value (teaching, coaching, employer engagement, learner support) is constantly squeezed by the work that consumes the most hours (compliance, administration, reporting, evidence gathering, quality assurance documentation).
For apprenticeship providers, this tension is even more acute. The funding rules and compliance requirements are substantial. Every learner needs an individual learning plan. Progress reviews must be documented at regular intervals. Off-the-job training hours must be evidenced. Employer engagement must be recorded. Functional skills support must be tracked. Gateway readiness must be assessed. And all of it must be audit-ready, because ESFA and Ofsted can and do check.
The result is that trainers, assessors, and quality teams spend a disproportionate amount of their time producing documentation rather than doing the work the documentation is supposed to evidence. The IQA who could be coaching assessors is instead writing sampling reports. The trainer who should be developing innovative delivery methods is formatting evidence portfolios. The employer engagement manager who should be out building relationships is compiling progress reports.
Every improvement the sector has adopted has been incremental. Better e-portfolio systems. Streamlined funding claim software. Template libraries for ILPs and reviews. Each one helps. None of them change the fundamental ratio between productive work and administrative overhead.
AI, implemented properly, changes the ratio.
Why Previous AI Attempts Have Not Worked
Most training professionals who have tried AI have had a familiar experience. They asked ChatGPT to write a lesson plan. The output was technically competent but generic. It did not reflect the qualification specification. It did not align with the provider’s delivery model. It did not account for the employer’s workplace context. It read like it was written by someone who had never delivered training. They closed the tab and went back to doing it manually.
The problem was never the capability of the AI. The problem was that a general-purpose chatbot with no knowledge of your standards, your programmes, your learner cohorts, your employer partnerships, or the regulatory framework you operate within will always produce generic output. It is the equivalent of asking a supply teacher to write your curriculum. They might understand pedagogy, but they do not understand your context.
The current generation of AI (Claude in particular) works differently. It is not a pre-built education tool. It is a general intelligence layer that you configure around your organisation. You teach it your delivery model, your qualification standards, your assessment criteria, your house style for learner materials, your IQA frameworks, and the specific requirements of the funding bodies and awarding organisations you work with. Then it applies that understanding across everything it produces.
What “Running on Claude” Actually Means for a Training Provider
A properly configured Claude implementation works in layers. Each one builds on the last, and together they transform what a training team can produce.
Layer 1: Personalisation
Every user gets their own configuration. Not a shared login. A setup that reflects how they specifically work. A trainer’s preferred approach to session planning. An assessor’s style for writing learner feedback. A quality manager’s format for IQA reports. Claude learns these preferences and applies them consistently.
This is the detail that makes adoption stick. The reason most training staff abandon AI tools is that the outputs do not sound right. Learner feedback that reads like it was written by a machine undermines trust. When Claude is configured to a specific person’s style, the output sounds like their work. The assessor reviews and adjusts rather than rewrites.
Layer 2: Shared Projects
A Project in Claude is a persistent workspace loaded with your organisation’s content. Think of it as giving Claude institutional memory.
A programme Project might contain: the qualification specification, the assessment plan, your scheme of work, session plans from previous cohorts, employer briefs, sample assignments, marking rubrics, and examples of learner work at distinction level. When a trainer works within that Project, Claude draws on all of it. The session plan it helps draft aligns with the spec. The assignment brief reflects your assessment criteria. The learner feedback references the actual standards.
An apprenticeship Project might contain: the apprenticeship standard, the assessment plan, the EPA specification, your employer handbook, the off-the-job training policy, example progress review templates, gateway checklists, and the ESFA funding rules relevant to that programme. Every piece of work produced within that Project is grounded in the right regulatory context.
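For teams that want the same grounding in their own tooling, the idea is straightforward to sketch against the Anthropic API. Projects themselves are configured inside the Claude app, so this is an illustration of the principle rather than how the feature works internally; the file names, model alias, and prompts below are placeholders, not a prescribed setup.

```python
# Minimal sketch: grounding a request in programme documents via the
# Anthropic Python SDK. Illustrative only -- in the Claude app this is
# what a Project's knowledge base does for you automatically.
# File names are hypothetical; requires ANTHROPIC_API_KEY to be set.
from pathlib import Path

import anthropic

# The documents that would live in the Project's knowledge base.
context_files = [
    "qualification_specification.md",
    "assessment_plan.md",
    "scheme_of_work.md",
]
context = "\n\n---\n\n".join(
    Path(f).read_text(encoding="utf-8") for f in context_files
)

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-5",  # model alias; check current model names
    max_tokens=2000,
    # The system prompt carries the institutional context, so every
    # output is grounded in the provider's own standards.
    system=(
        "You are a curriculum assistant for a UK training provider. "
        "Ground every answer in the documents below.\n\n" + context
    ),
    messages=[
        {
            "role": "user",
            "content": "Draft a session plan for Unit 2, cohort starting March.",
        }
    ],
)
print(response.content[0].text)
```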
Layer 3: Skills
This is where the real leverage sits. A Skill is a reusable instruction set that encodes how your organisation does a specific task. Not a template. A complete workflow that captures your quality standards, your regulatory requirements, and your organisational voice.
Example: Progress Review Skill
An assessor has just completed a learner progress review. They need to write up the record: progress against the learning plan, employer feedback, off-the-job hours reconciliation, functional skills update, targets for the next period, and any support needs identified. This write-up, done properly, takes 30 to 45 minutes per learner.
With the progress review Skill, the assessor inputs their notes from the conversation and the Skill produces a structured write-up in your organisation’s standard format: progress mapped against specific KSBs, off-the-job hours calculated and evidenced, SMART targets set for the next review period, support needs flagged, and the record formatted ready for upload to the e-portfolio. The assessor reviews, adjusts anything that needs their professional judgment, and moves on. What took 45 minutes takes 10. Multiply that by a caseload of 40 learners and the time recovered is transformative.
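What a Skill encodes is easiest to see written down. The sketch below is a hypothetical instruction set for the progress review workflow just described, expressed as a Python prompt template that could be passed to the API call in the earlier sketch; the section headings and field names are illustrative, not a prescribed format.

```python
# Illustrative sketch of a progress review Skill: a reusable
# instruction set that encodes the organisation's review format.
# Sections and field names are hypothetical; adapt to your template.
PROGRESS_REVIEW_SKILL = """\
You write apprenticeship progress review records for {provider}.
From the assessor's raw notes, produce a record with these sections:
1. Progress against the learning plan, mapped to specific KSBs.
2. Off-the-job hours this period, reconciled against the plan.
3. Employer feedback summary.
4. Functional skills update.
5. SMART targets for the next review period.
6. Support needs identified, with recommended actions.
Write in the assessor's first person, in plain professional English,
ready for upload to the e-portfolio. Flag anything ambiguous in the
notes as [CHECK WITH ASSESSOR] rather than guessing.
"""

def build_review_prompt(provider: str, learner: str, notes: str) -> tuple[str, str]:
    """Return (system_prompt, user_message) for the review write-up.

    The pair would be passed to client.messages.create() as in the
    earlier sketch; the assessor then reviews and edits the draft.
    """
    system = PROGRESS_REVIEW_SKILL.format(provider=provider)
    user = f"Learner: {learner}\n\nAssessor notes:\n{notes}"
    return system, user
```

The Skills that follow work on the same pattern: a reusable instruction set carrying the organisation's standards, plus the task-specific inputs for each job.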
Example: IQA Sampling Report Skill
The quality team needs to produce IQA sampling reports across multiple programmes. Each report requires a review of assessor decisions, analysis of learner evidence, verification of assessment methods against the qualification specification, and recommendations for assessor development. The IQA Skill takes the sampling data, cross-references it against the assessment criteria, and produces a structured report in your organisation’s format: assessor performance analysis, learner evidence evaluation, areas of good practice, actions required, and a standardisation summary.
The quality manager’s time shifts from writing reports to acting on the findings. The IQA process becomes a genuine quality improvement mechanism rather than a documentation exercise.
Example: Session Planning Skill
A trainer needs to plan a sequence of sessions for a new cohort. The session planning Skill takes the qualification unit, the learner profile, and the employer context, and produces a structured session plan: learning objectives mapped to the specification, differentiated activities, employer-relevant scenarios, assessment opportunities embedded, and resources identified. The plan reflects your organisation’s delivery model, not a generic template from the internet.
For apprenticeship delivery, the Skill maps activities against the knowledge, skills, and behaviours in the standard, calculates the off-the-job contribution, and flags where workplace evidence opportunities should be built in. The trainer’s time goes to refining and contextualising, not building from a blank page.
Example: Employer Reporting Skill
An employer partner wants a quarterly update on their apprenticeship cohort. The employer reporting Skill compiles the data: individual learner progress against the programme timeline, attendance and engagement metrics, upcoming milestones (gateway, EPA), any concerns flagged, and a summary of the value being delivered. The output is a professional report that demonstrates the provider’s quality and strengthens the employer relationship.
The report that used to take half a day to compile is ready for review in minutes. The employer engagement manager spends their time on the conversation, not the document.
Each Skill gets refined over time. The progress review Skill that works for a Level 3 Business Administrator apprenticeship gets adapted for a Level 5 Operations Manager. The session planning Skill gets tuned for different qualification types. The library grows, and the quality of every output improves because each iteration builds on what came before. After six months, your organisation’s Skill library represents institutional knowledge, encoded and reusable across the team.
Layer 4: M365 Integration
Claude Team Plan connects natively to Microsoft 365: Outlook, SharePoint, OneDrive, Teams. This means Claude can read your existing documents, search your email correspondence, summarise learner communications, and work with the files already in your systems. No copying content between platforms. No switching between tools. Claude operates inside the infrastructure you already use.
For a training provider running on Microsoft (which is the majority), this removes the adoption barrier that kills most new tools: the friction of changing how people work.
The Apprenticeship Angle
Apprenticeship delivery is where AI implementation has perhaps the most dramatic impact, because the compliance and documentation burden is so high relative to most other provision types.
Consider the lifecycle of a single apprentice. Recruitment and initial assessment. Individual learning plan. Regular progress reviews (typically every 4 to 8 weeks). Off-the-job training evidence. Functional skills tracking and support. Employer engagement records. Gateway preparation. EPA readiness assessment. Each stage generates documentation. Each document must meet specific standards. The cumulative administrative load per learner is substantial, and most providers are managing caseloads of 30 to 50 learners per assessor.
A properly configured Claude implementation does not eliminate any of these requirements. It transforms how quickly and consistently the documentation is produced. The assessor still has the conversation. They still apply their professional judgment. They still make the decisions about learner progress. But the two hours of write-up that follow each review day become thirty minutes of reviewing and refining AI-produced drafts that already reflect the right format, the right standards, and the right language.
For providers operating at scale, the arithmetic is compelling. An assessor with 40 learners doing 8-weekly reviews produces 260 progress review write-ups per year. At 45 minutes each, that is nearly 200 hours of documentation per assessor per year. Reduce that to 15 minutes through proper AI implementation and you recover 130 hours, per assessor, annually. That is time that goes back into learner contact, employer engagement, and quality improvement.
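The arithmetic, made explicit:

```python
# Per-assessor documentation arithmetic from the paragraph above.
reviews_per_year = 40 * (52 / 8)           # 40 learners, 8-weekly reviews -> 260
hours_before = reviews_per_year * 45 / 60  # 45-minute write-ups -> 195 hours
hours_after = reviews_per_year * 15 / 60   # 15-minute review-and-refine -> 65 hours
print(reviews_per_year, hours_before - hours_after)  # 260.0, 130.0 hours recovered
```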
The Economics
The cost structure is almost trivially small. A Claude Team Plan costs $25 per user per month, billed annually. A team of fifteen staff costs roughly £3,500 per year. That is less than the cost of a single part-time administrator. The return depends entirely on how well the tool is implemented.
The calculation that matters for training providers is capacity. If proper AI implementation recovers even one hour per staff member per day, that is five hours per week redirected from administration to delivery, learner support, quality improvement, or employer engagement. For a team of fifteen, that is 75 hours per week of reclaimed capacity. In a sector where margins are tight and funding rates are fixed, capacity is the variable that determines whether you can grow, improve, and sustain quality.
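Side by side, the two numbers that frame the decision, using the seat price above and assuming a five-day working week:

```python
# Seat cost versus recovered capacity for a fifteen-person team.
team_size = 15
annual_cost_usd = team_size * 25 * 12  # $25/user/month -> $4,500/year
weekly_hours_recovered = team_size * 5  # one hour/day, five-day week -> 75
print(annual_cost_usd, weekly_hours_recovered)
```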
But the quality argument is at least as important as the efficiency one. When IQA reports are more thorough, assessment becomes more consistent. When progress reviews are better documented, learner support improves. When employer reports are more professional, partnerships strengthen. When session plans are more carefully mapped to specifications, delivery quality rises. The outputs get better because staff spend their time on thinking and judgment rather than formatting and typing.
The Adoption Gap
Right now, most training providers are in one of three positions. Some have ignored AI entirely. Some have let individual staff experiment in an unstructured way. A very small number have implemented it properly around how their organisation actually works.
The gap between the third group and the first two is widening every month.
Anthropic, the company behind Claude, now leads the enterprise AI market. Their platform is purpose-built for organisations that need accuracy, data privacy, and document security. They do not train on your data. They offer SSO, centralised admin controls, and usage analytics. The data governance questions that legitimately concerned providers two years ago have largely been resolved. Learner data stays within your organisation's control.
The barrier is not cost. It is not security. It is not the technology. The barrier is implementation. Someone needs to understand how your organisation actually works (not the self-assessment version, the real operational reality), configure the tool around those workflows, build the Skills that encode your quality standards, and train your people so that adoption sticks beyond the first fortnight.
What Proper Implementation Looks Like
If you are a managing director, head of quality, or curriculum lead reading this and thinking about making a move, here is what doing it properly involves.
First, map the real workflows. Not the process map in the quality manual. The actual daily reality of what your team produces, where time disappears, and which tasks would be transformed by having an intelligent system configured around your organisation. An apprenticeship provider has different workflows to a commercial training company. An Ofsted-regulated provider has different compliance needs to a CPD specialist. The implementation must reflect how your organisation actually operates.
Second, configure Claude around your organisation. Load your quality frameworks, your assessment templates, your house style for learner materials, your IQA documentation, your employer reporting formats, and your best examples of good practice. A generic Claude account is useful. A Claude account loaded with your organisation’s accumulated standards and content is transformative.
Third, build Skills for your highest-frequency tasks. Identify the five or ten tasks that consume the most staff hours across the organisation. Build a Skill for each one. Progress reviews. IQA reports. Session planning. Employer updates. Learner feedback. Self-assessment inputs. Each Skill captures not just the process but your organisation’s quality standards and voice.
Fourth, train the people individually. Group training is a starting point, but adoption lives or dies at the individual level. An assessor needs a different configuration to a curriculum designer. A quality manager needs a different setup to a business development lead. The implementation has to reflect how each person actually works.
Fifth, sustain it. The organisations that get transformative results are the ones that keep building. New Skills get created as confidence grows. The library expands into new use cases. An internal champion maintains and evolves the setup. This is not a one-off project. It is a new operating capability for the organisation.
The Question That Matters
The question facing UK training providers right now is not whether AI will change the sector. That question has been answered. The question is whether you implement it properly (configured around how your organisation actually works, with the depth of setup that produces real results) or whether you buy seats, run a CPD session, and end up in the same position six months from now.
I work with professional services organisations and training providers to do the first version. A focused enablement sprint that configures Claude around your operation, builds the Skills your team needs, loads your content, and trains your people so that by the end of the engagement, every person has a working system they will actually use.
Not a product. Not a training day. Not a slide deck about what AI could theoretically do. A working implementation, configured to your organisation, ready to use.