An Open Letter on the AI Skills Hub: Ambition Is Not Enough

Dear Kanishka Narayan MP, 

I am writing to you, in your role as Parliamentary Under-Secretary of State (Minister for AI and Online Safety), in response to the launch of the AI Skills Hub and the government’s commitment to provide free AI training to 10 million workers by 2030.

AI has the potential to reduce barriers to work, particularly for disabled people and others who are currently economically inactive but want to participate in employment. Used well, AI can support accessibility, reduce administrative burden, enable flexible working, and act as a reasonable adjustment in many roles. That is why this initiative matters, and why I want it to succeed.

I work with organisations on accessibility, bias, culture, and responsible AI use. I am not opposed to AI, nor to industry involvement; cross-sector participation is crucial to building better systems that work for users. I am, however, concerned with whether public AI initiatives are designed with the people they are meant to serve in mind, and whether they meet basic standards of quality, transparency, and value for money.

Having now reviewed the AI Skills Hub in detail, and in light of recent investigative reporting, I am concerned that the platform does not yet meet those standards, particularly given the reported £4.1 million of public funding allocated to its development. The issue is no longer simply that the Hub relies heavily on existing courses from technology companies and educational institutions. Reuse of existing material can be sensible and efficient. The problem is that the Hub appears to lack meaningful curation, quality assurance, and learner-centred design.

Independent scrutiny has highlighted content that is misleading, irrelevant, or clearly unsuitable for a national skills programme. This includes learning offers that do not exist, material that is decades out of date, courses that appear to provide little or no recognised value despite significant cost, and international training options that are inaccessible to UK learners due to geographic or eligibility constraints.

Taken together, this gives the impression of a platform assembled quickly by aggregating content, rather than a carefully designed skills offer built around the needs of UK workers. For people approaching AI for the first time, including many disabled people, older workers, or those returning to work, this creates confusion, not confidence.

It is also concerning that the government response has framed the Hub as intentionally focused on “deep and specialist expertise”. That framing sits uneasily alongside public statements describing the offer as free AI skills training for millions of workers. Specialist training has its place, but it cannot substitute for a clear, accessible foundation that helps people understand what AI is, how it can be used safely at work, and how to think critically about its limits, risks, and appropriate use.

From an AI governance perspective, this matters. If we want responsible adoption, we need skills provision that includes judgment, context, and challenge, not just exposure to tools or vendor narratives. At present, the balance feels skewed towards the priorities of technology providers rather than the realities of workplaces and workers.

There is also a basic product question here. If a private sector AI company had released a platform containing broken links, misleading descriptions, inaccessible courses, and content of questionable relevance, it would rightly be subject to serious internal scrutiny. Public sector digital products should be held to at least the same standard, particularly when they are positioned as flagship national initiatives.

I would therefore welcome clarity on a number of points:

  • What quality assurance processes were applied before content was included on the AI Skills Hub?

  • How is relevance to UK workers assessed, particularly for international or specialist offerings?

  • Were disabled people, economically inactive individuals, and organisations supporting them involved in shaping the learning pathways?

  • Were any new courses commissioned specifically to address foundational AI skills gaps for beginners?

  • How is the claim of “free” training being defined and enforced across the platform?

I remain optimistic about what this initiative could become. With stronger governance, curation, and at least one genuinely beginner-level course developed specifically for this programme, the AI Skills Hub could play a meaningful role in supporting inclusive employment and responsible AI use across the UK workforce. At present, however, it risks undermining trust, both in AI skills policy and in the government’s wider approach to digital inclusion. That would be a missed opportunity at a time when confidence and credibility are essential. 

I would welcome the opportunity to discuss this further and to contribute to the development of an AI skills offer that is accessible, credible, and genuinely designed around the people it is meant to serve.

Yours sincerely,

Rachael Mole CF 

Managing Director, Moleworks Solutions 
