From literacy to liability…

Welcome to the 11 new deep divers who joined us since last Wednesday.

If you haven’t already, subscribe and join our community to receive weekly AI insights, updates and interviews with industry experts, straight to your feed.


DeepDive 

When we asked Lee Tiedrich (AI expert for the OECD, advisor at NIST, and professor at Duke University) if there’s one thing she wishes everyone knew about AI, she said: 

“I wish everyone knew that there is much they can learn about AI to help them understand how it’s transforming people’s lives. Knowledge can empower people to make choices that will help them make the most of AI’s benefits and protect themselves against the harms and risks.” 

Tiedrich’s point was that there are plenty of resources out there that anyone can use to build their own AI knowledge. But it’s an important note for business leaders too. 

When business leaders embrace AI, they tend to focus on potential performance gains and competitive edge. But today, AI literacy is also a form of risk management – and every organisation should be leveraging educational resources to upskill its employees. 

Why AI literacy is becoming a regulatory imperative 

The OECD and European Commission’s AI Literacy Framework (AILit Framework), released as a review draft in May 2025, presents a structured, globally informed approach to what AI literacy should entail. 

Designed for educators, policymakers, and designers, it identifies 22 competences across four domains:

  1. Engaging with AI
  2. Creating with AI
  3. Managing AI
  4. Designing AI

It’s intended to inform curriculum, assessment, and policy design worldwide. It also feeds into the upcoming PISA 2029 Media & AI Literacy (MAIL) assessment, with the framework’s final version scheduled for launch in 2026, alongside exemplars for education systems seeking to integrate these competences. 

At the same time, the EU’s AI Act (in force since August 2024, with its AI literacy obligation applying from February 2025) mandates that both providers and deployers of AI systems ensure that their staff – and anyone using the systems on their behalf – have a sufficient level of AI literacy. 

In other words, literacy is becoming a regulatory requirement. 

What the public thinks (and why it matters)

Public opinion polling in the Stanford HAI AI Index 2025 reveals that, around the world, optimism about AI is rising – but at the same time, trust is faltering.

Across 26 nations surveyed between 2022 and 2024, the share of people who believe AI products and services ‘offer more benefits than drawbacks’ has increased from 52% in 2022 to 55% in 2024. And about two-thirds of respondents now believe that AI will significantly impact daily life within the next three to five years (that’s a rise of six percentage points compared to 2022). 

But public confidence in AI firms to protect personal data fell from 50% in 2023 to 47% in 2024, and fewer respondents believe AI systems are unbiased and free from discrimination. These declines point to an erosion of trust that is likely to continue – and, in turn, to increase the regulatory and reputational risks an organisation takes on when it deploys AI. 

A checklist for AI literacy 

AI literacy is becoming a legal and reputational shield for any organisation that adopts AI technologies. So those organisations need to move beyond generic training and align AI literacy with their operational and regulatory requirements. 

They can do this through…

  1. Role-specific literacy mapping
    Align the four AILit domains to corporate functions: legal, risk, product, HR, compliance. And understand exactly what ‘managing AI’ looks like for each team (see the sketch after this list).
  2. Evidence-backed training
    Embed assessments, badging, or scenario-based drills that mirror AILit competences. Evidence of literacy may form part of compliance audits under the EU AI Act, and other regions are likely to implement similar audits going forward.
  3. Public perception monitoring
    Use longitudinal data from AI Index surveys to understand public trust trends – this informs risk assessments and helps anticipate regulatory scrutiny.
  4. Transparency and documentation
    Demonstrate organisational understanding of AI risks and benefits. By sharing training outcomes and literacy initiatives in annual reports or regulator filings, organisations can show proactive governance.
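
To make the first step concrete, here’s a minimal sketch of what a role-specific literacy map could look like in code. The four domain names come from the AILit Framework; the roles, the domain assignments, and the `coverage_gaps` helper are hypothetical illustrations, not part of the framework or of any regulatory requirement.

```python
# Hypothetical sketch: mapping AILit domains to corporate functions.
# The four domains are from the AILit Framework; the role assignments
# below are illustrative assumptions, not prescribed by the framework.

AILIT_DOMAINS = ["Engaging with AI", "Creating with AI", "Managing AI", "Designing AI"]

# Which domains each function should be trained on (illustrative only).
role_map = {
    "legal":      ["Engaging with AI", "Managing AI"],
    "risk":       ["Engaging with AI", "Managing AI"],
    "product":    ["Creating with AI", "Designing AI"],
    "hr":         ["Engaging with AI", "Managing AI"],
    "compliance": AILIT_DOMAINS,  # oversight spans all four domains
}

def coverage_gaps(mapping: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return the AILit domains each role is not yet mapped to."""
    return {
        role: [d for d in AILIT_DOMAINS if d not in domains]
        for role, domains in mapping.items()
    }

if __name__ == "__main__":
    for role, gaps in coverage_gaps(role_map).items():
        status = "fully mapped" if not gaps else "missing: " + ", ".join(gaps)
        print(f"{role}: {status}")
```

Even a simple map like this makes gaps visible – which teams have no ‘Creating with AI’ coverage, for instance – and gives training plans an auditable starting point.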

AI learning should be engaging, and even fun

Tiedrich pointed out that “learning about AI can be both interesting and fun.” 

And rather than treating that as a throwaway comment, business leaders should embrace it. Because if we’re brutally honest, compliance-driven training is often very boring – and that’s one of the reasons it fails. 

There’s no need for AI training to be boring. You have a vast array of tools at your fingertips, from sandbox environments to simulations that bring AI literacy to life. And when you offer learning opportunities that people actually look forward to, your team will be able to internalise their new knowledge and apply it across your organisation. 

AI literacy is a new form of liability mitigation in regulated and dynamic environments. If you want your team to help you reduce exposure to public trust volatility and make informed decisions in the face of evolving risks, then you need to make the most of the tools that are out there – and create learning programmes that mean something.

Join us in Riyadh for DeepFest 2026 to explore the links between AI skill and risk management.
