Awarding organisations and EPAOs sit at one of the most interesting pressure points in UK education. You are responsible for the integrity of qualifications and assessments. You operate under Ofqual or IfATE oversight. You serve hundreds of centres and thousands of learners. And most of you are doing it with teams that are far smaller than the scope of the operation would suggest.
The workload is relentless. New qualifications need developing. Existing qualifications need updating. Assessment materials need writing, reviewing, moderating, and standardising. Centre approval applications need processing. External quality assurance visits need scheduling, conducting, and reporting. Regulatory submissions need preparing. And through all of it, the product development pipeline that determines the organisation’s future revenue is competing for the same people’s time as the operational compliance that keeps the lights on.
Most awarding organisations know AI is changing things. Some staff have experimented. But nothing has fundamentally changed how the organisation produces work. The question paper development cycle still takes the same number of months. The EQA reports still take the same number of hours. The new qualification specifications still require the same marathon of drafting, reviewing, and regulatory submission.
That is not because AI cannot help. It is because nobody has implemented it around how an awarding organisation actually operates.
The Dual Challenge: Operations and Product Development
Awarding organisations face a challenge that most other professional services organisations do not: they must simultaneously run a complex operational engine (assessment delivery, quality assurance, centre management, regulatory compliance) while also developing new products (qualifications, apprenticeship standards, EPA specifications) that determine future revenue.
In practice, operations almost always wins. The assessment materials need to be ready for the window. The EQA visit is next week. The centre approval application needs a decision. The Ofqual conditions of recognition need evidencing. Product development gets whatever capacity is left, which is rarely enough.
This creates a strategic problem. The organisations that develop new qualifications faster, respond to market demand sooner, and bring better products to centres more quickly are the ones that grow. But the capacity to do that development work is consumed by operational delivery. It is a trap that most awarding organisations recognise but struggle to escape.
AI, implemented properly, changes both sides of the equation. It accelerates the operational work so that it consumes less capacity, and it transforms the product development process so that new qualifications can be brought to market faster and with higher quality.
Why Generic AI Has Not Worked
The awarding organisation that tried ChatGPT to draft a question paper discovered what everyone in the sector already suspects: a generic AI that does not understand your qualification specification, your assessment strategy, your command verb conventions, your mark scheme structure, or Ofqual’s requirements for assessment validity will produce output that is at best a starting point and at worst actively misleading. The subject specialist spent longer fixing the output than they would have spent writing it from scratch.
The current generation of AI (Claude in particular) works differently. It is not a pre-built assessment tool. It is a general intelligence layer that you configure around your organisation. You teach it your qualification structures, your assessment methodologies, your command verb taxonomy, your mark scheme conventions, your house style, and the regulatory framework you operate within. Then it applies that understanding across everything it produces.
The difference is the gap between a general knowledge chatbot and a colleague who has spent five years in your product development team. The chatbot generates text. The colleague understands context, quality standards, and regulatory requirements.
What “Running on Claude” Actually Means
A properly configured Claude implementation works in layers. Each one builds on the last.
Layer 1: Personalisation
Every user gets their own configuration. A subject specialist’s preferred approach to writing assessment criteria. An EQA’s style for centre reports. A product developer’s format for qualification specifications. Claude learns these preferences and applies them consistently, so outputs sound like they were produced by the person who will sign them off.
Layer 2: Shared Projects
A Project in Claude is a persistent workspace loaded with your organisation’s content.
A qualification Project might contain: the qualification specification, the assessment strategy, all current assessment materials, the mark scheme, chief examiner reports from previous series, grade boundary data, centre feedback, and the relevant Ofqual General Conditions. When a subject specialist works within that Project, Claude draws on all of it. The draft question paper aligns with the specification. The mark scheme follows your conventions. The assessment is designed with validity and reliability principles built in from the start.
An EPA Project might contain: the apprenticeship standard, the assessment plan, the EPA specification, the grading descriptors, examiner guidance, sample assessment materials, and IfATE/Ofqual requirements for the specific standard. Every piece of assessment material produced within that Project is grounded in the right regulatory and professional context.
Layer 3: Skills
This is where the real leverage sits, and where the product development angle becomes genuinely exciting.
Example: Question Paper Development Skill
A subject specialist needs to produce a new question paper for an upcoming assessment window. The question paper Skill takes the specification content, the assessment objectives weighting, and the command verb requirements, and produces a draft paper: questions mapped to the specification, mark allocations aligned with the assessment strategy, command verbs used consistently, and the overall paper structured to assess across the full range of the mark scheme.
The specialist then applies their subject expertise: refining the stimulus material, adjusting the difficulty balance, ensuring the paper assesses authentically, and checking that no question overlaps with recent series. The production work is accelerated. The professional judgment work gets more time.
Example: Qualification Specification Drafting Skill
This is where product development accelerates dramatically. Developing a new qualification specification typically involves months of work: market research, sector consultation, learning outcomes drafting, assessment criteria development, unit structuring, guided learning hours calculation, regulatory mapping, and the submission documentation for Ofqual.
The specification drafting Skill takes the market research outputs and sector consultation findings and produces a structured first draft: learning outcomes written to Bloom’s taxonomy conventions, assessment criteria mapped to the outcomes, units structured with appropriate GLH and credit values, assessment strategy outlined, and the Ofqual submission checklist pre-populated. What used to be a three-month drafting process can be compressed significantly, with the subject specialists focusing their time on the content decisions rather than the document production.
Example: EPA Specification Development Skill
An EPAO developing a new EPA specification needs to translate an apprenticeship standard into a valid, reliable, and deliverable assessment. The EPA development Skill takes the standard, the assessment plan, and the grading descriptors, and produces a draft specification: assessment methods mapped to KSBs, assessment criteria written for each method, grading descriptors operationalised into observable indicators, examiner guidance drafted, and sample assessment materials outlined. The development team reviews against their professional assessment expertise rather than starting from a blank document.
Example: EQA Report Skill
An external quality assurer has conducted a centre visit. They have their notes, their sampling findings, and their observations. The EQA report Skill takes these inputs and produces a structured report in your organisation’s standard format: centre context, sampling outcomes, conditions and recommendations with specific references to the relevant quality assurance criteria, areas of good practice, action plan requirements, and risk rating. The report that took two hours to write takes thirty minutes to review and refine.
Example: Centre Approval Skill
Centre approval applications arrive with varying levels of completeness and quality. The centre approval Skill reviews the application against your approval criteria, flags gaps or concerns, identifies areas requiring further evidence, and produces a structured assessment summary with a recommendation. The approvals team focuses on the judgment calls rather than the document review.
Each Skill compounds over time. The question paper Skill that works for a Level 2 qualification gets adapted for Level 4. The EQA report Skill gets tuned for different qualification types and centre risk profiles. After six months, the Skill library represents a significant portion of the organisation’s operational methodology, encoded and consistently applied.
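To make the Skill idea concrete: in Claude, a Skill is a folder containing a SKILL.md file with YAML frontmatter that tells Claude when and how to apply it. The sketch below is purely illustrative for the EQA report example above; the section names, criteria references, and rules are hypothetical placeholders, and a real Skill would encode your organisation's actual template and quality assurance framework.

```markdown
---
name: eqa-centre-report
description: Drafts an external quality assurance centre visit report in
  house format from the EQA's visit notes, sampling findings, and
  observations. Use when an EQA asks for a centre report draft.
---

# EQA Centre Report

## Inputs
- Visit notes, sampling records, and observations (pasted or attached)
- Centre name, qualification(s) sampled, visit date

## Structure (house format)
1. Centre context and scope of visit
2. Sampling outcomes, by qualification and unit
3. Conditions and recommendations, each citing the relevant QA criterion
4. Areas of good practice
5. Action plan requirements with deadlines
6. Risk rating, with a one-paragraph justification

## Rules
- Follow the organisation's house style and report template
- Every condition must reference a specific quality assurance criterion
- Flag, rather than invent, any missing evidence
```

The value is that this file, not any individual's memory, becomes the canonical description of how the report is produced, which is what makes the quality consistent across EQAs.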
Layer 4: M365 Integration
Claude Team Plan connects natively to Microsoft 365: Outlook, SharePoint, OneDrive, Teams. For an awarding organisation, this means Claude can access your document libraries, search your correspondence with centres, summarise regulatory communications, and work within your existing file structure. No switching between systems. No copying content into a separate tool.

The Product Development Opportunity
This is the angle that most awarding organisations have not yet considered, and it may be the most valuable.
The speed at which an awarding organisation can develop and bring a new qualification to market is one of the most significant competitive variables in the sector. The organisation that can identify market demand, develop a specification, create assessment materials, submit for regulatory approval, and launch to centres faster than competitors wins the market. In sectors where skills needs are evolving rapidly (technology, digital, green energy, health and social care), the development cycle is the bottleneck.
A properly configured Claude implementation compresses the product development cycle at every stage. Market research analysis is faster. Specification drafting is faster. Assessment material development is faster. Regulatory submission documentation is faster. Centre support materials are faster. None of these replace the professional expertise required at each stage. All of them reduce the production overhead that slows the pipeline.
For an EPAO, the same logic applies to EPA specification development. The organisations that can respond to new apprenticeship standards quickly, develop valid and reliable EPA specifications, and be ready to assess when the first cohorts reach gateway have a structural advantage over those that take twelve months to develop each new product.
The Economics
A Claude Team Plan costs $20 per user per month. A team of twenty staff costs roughly £3,800 per year. That is a fraction of the cost of a single new product development project. The return depends entirely on implementation quality.
The calculation for an awarding organisation is slightly different from other professional services. It is not just about time saved on existing operations (though that is significant). It is about what becomes possible when operational overhead drops and product development capacity increases. If you can bring two additional qualifications to market per year because your development team spends less time on production and more time on the decisions that require expertise, the revenue impact dwarfs the cost of the tool.
For EPAOs, the arithmetic is even more direct. Each new EPA standard you can offer represents a new revenue stream. The speed at which you can develop, gain recognition, and be ready to assess determines how much of that market you capture. Compressing the development cycle by even 30% changes the competitive dynamics.
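The arithmetic above is easy to check. This sketch works through the cost figure and the cycle-compression claim from this section; the exchange rate is an assumption, not something the article states.

```python
# Quick check of the figures in this section.
# Assumption (not from the article): a GBP/USD rate of ~0.79.

SEAT_USD_PER_MONTH = 20      # Claude Team Plan price quoted above
TEAM_SIZE = 20               # staff on the plan
GBP_PER_USD = 0.79           # assumed exchange rate

annual_usd = SEAT_USD_PER_MONTH * TEAM_SIZE * 12
annual_gbp = annual_usd * GBP_PER_USD
print(f"Annual cost: ${annual_usd:,} ~= £{annual_gbp:,.0f}")

# Effect of compressing a twelve-month EPA development cycle by 30%:
cycle_months = 12
compressed = cycle_months * (1 - 0.30)       # ~8.4 months
throughput_gain = cycle_months / compressed  # ~1.43x products per year
print(f"Compressed cycle: {compressed:.1f} months "
      f"({throughput_gain:.2f}x development throughput)")
```

In other words, a 30% compression does not just shave months off one project; it raises the number of new products a fixed team can ship each year by roughly 40%.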
The Adoption Gap
Anthropic, the company behind Claude, now holds a leading share of the enterprise AI market according to industry analyses, and that share has grown sharply over recent quarters. Its platform does not train on your data by default. It offers SSO, centralised admin controls, and usage analytics. For an organisation handling assessment materials, question papers, and regulatory documentation, the data governance is robust.
The barrier is not cost, security, or technology. The barrier is implementation. Someone needs to understand how your organisation actually operates (the product development pipeline, the assessment cycle, the quality assurance framework, the centre management process), configure the tool around those workflows, build the Skills that encode your organisational standards, and train your people so that adoption sticks.
The organisations that buy seats and experiment will see marginal improvement. The organisations that implement properly will operate at a fundamentally different level: faster product development, more consistent quality assurance, better centre support, and teams that spend their expertise on the work that actually requires expertise.
What Proper Implementation Looks Like
If you are a CEO, head of product, or quality director reading this and thinking about making a move, here is what doing it properly involves.
First, map the real workflows. Product development. Assessment material production. Quality assurance. Centre management. Regulatory compliance. Understand where time is spent, where bottlenecks exist, and where the highest-value tasks are being crowded out by production work.
Second, configure Claude around your organisation. Load your qualification specifications, your assessment conventions, your house style, your regulatory frameworks, your quality criteria, your best examples of assessment materials and reports. The depth of the configuration determines the quality of every output.
Third, build Skills for your highest-impact workflows. Question paper development. Specification drafting. EQA reporting. Centre approval. Mark scheme development. EPA specification creation. Each Skill encodes how your organisation does this work, at the quality standard you expect.
Fourth, train the people individually. A subject specialist needs a different configuration to a quality assurer. A product developer needs a different setup to a centre support manager. Individual configuration is where adoption becomes permanent.
Fifth, sustain and expand. Start with the workflows that have the highest volume or the biggest bottleneck. Build confidence. Expand into product development acceleration. Keep building the Skill library. The organisations that treat this as a one-off project will see one-off results. The ones that build it as an ongoing capability will compound their advantage every quarter.
The Question That Matters
The question facing UK awarding organisations and EPAOs is not whether AI will change how qualifications are developed and assessed. That question has been answered. The question is whether you implement it properly (configured around your specific operational and product development workflows, with the depth that produces real results) or whether you experiment at the edges and watch competitors move faster.
I work with professional services organisations to implement Claude properly. A focused enablement sprint that configures the tool around your operation, builds the Skills your team needs, loads your content, and trains your people so that by the end of the engagement, every person has a working system they will actually use.
Not a product. Not a training day. Not a slide deck. A working implementation, configured to your organisation, ready to use.