
Welcome to the 9 new deep divers who joined us since last week.
If you haven’t already, subscribe and join our community to receive weekly AI insights, updates and interviews with industry experts straight to your feed.
If AI governance were aviation, we’d be watching history repeat itself. In the early days when excitement was at its peak, voluntary ‘fly safely’ pledges were enough to keep daredevil pilots in the air. Then came controlled test flights with a safety officer. And eventually, international standards and binding law gave us air-traffic control and an architecture of trust that made flight safe enough to go commercial.
And AI is following the same trajectory. Broad voluntary principles are maturing into supervised experimentation – and, most significantly, into binding statutes. As those statutes solidify, the big question is how governments will enforce them.
Our anchor here is the OECD’s working paper, a mapping tool for digital regulatory frameworks that includes a pilot on efforts to regulate AI. Instead of just cataloguing policies, it x-rays them:
The pilot applies the tool to 13 whole-of-government AI efforts and identifies convergence toward proportional risk-based frameworks, growing use of regulatory experimentation, and varied inspection/enforcement arrangements.
UNCTAD’s Technology and Innovation Report 2025 describes the same evolution in plain terms: principles-based approaches (such as the 2019 OECD AI Principles); risk-based models that categorise systems as unacceptable, high, limited or minimal risk (as in the EU AI Act); and liability-based approaches that enable redress when harm occurs.
It also flags that leadership in international initiatives is concentrated among G7 economies – while many countries in the Global South remain under-represented.
Sandboxes have been the pragmatic bridge between ideals and law: giving us the space for time-limited trials under regulator supervision to learn what to permit, prohibit, or standardise. The OECD mapping explicitly tracks ‘regulatory experimentation’; and a companion Regulatory Sandbox Toolkit (July 2025) sets out design choices and operational lessons for authorities institutionalising sandbox-to-rulebook pipelines.
Binding statutes are now arriving on the scene. And the European Union’s Artificial Intelligence Act is an emblematic example. The Act entered into force on 1 August 2024 (Regulation (EU) 2024/1689), with compliance obligations phased in over the following years: prohibitions on unacceptable-risk practices apply from 2 February 2025, obligations for general-purpose AI models from 2 August 2025, most remaining provisions from 2 August 2026, and high-risk systems embedded in regulated products get an extended transition to 2 August 2027.
The OECD mapping also highlights the nuts and bolts that separate paper rules from real-world compliance – things like who inspects and sanctions, whether jurisdictions rely on existing regulators or create new powers, and how oversight is coordinated (including the EU’s AI Office, with specific authority over foundation models). This is where enforcement capacity (not just legal text) will differentiate jurisdictions.
And at the same time, interim implementation is getting practical. Stanford’s Artificial Intelligence Index Report 2025 notes that the European AI Office issued the first draft of a Code of Practice for General-Purpose AI in November 2024 to help providers demonstrate compliance until formal standards are finalised.
If we zoom out, the AI Index 2025 logs a flurry of governance milestones beyond the EU. There’s the Council of Europe’s legally binding AI treaty, the creation of the International Network of AI Safety Institutes, ASEAN–U.S. cooperation statements, and more. They’re all evidence of a broadening consensus on transparency, accountability, and safety, even as institutional designs differ.
UNCTAD’s analysis highlights an important inclusivity gap: 118 countries (mostly in the Global South) are party to none of seven major international AI governance initiatives it tracked, despite many providing essential inputs to the AI ecosystem (from data work to raw materials). The report urges more representative participation to avoid a fragmented patchwork of regimes.
But inside enterprises, regulation is already shaping behaviour. The AI Index cites survey data showing that 65% of organisations say GDPR influences their responsible-AI decision-making; and ‘regulatory uncertainty’ ranks among the top obstacles to implementing responsible-AI measures.
The aviation analogy holds: principles set direction, sandboxes are supervised test flights, and statutes/standards hard-wire risk controls and oversight – backed by networks of authorities that coordinate practice (much as ICAO did for aviation). For the next 12–24 months, three details will matter most: who inspects and sanctions, how lessons from sandboxes feed into binding rules, and how oversight is coordinated across borders.
Regulators are converging on risk-based models, but the how differs between authorities – especially when it comes to enforcement.
The three sources we’ve explored here are useful to all businesses deploying AI. You can use the OECD mapping to benchmark your jurisdiction and sector; use UNCTAD to stress-test inclusivity and development impacts; and use the AI Index to track where policy signals are moving markets and organisational behaviour.
In aviation terms: align with the global flight rules, but pay very close attention to your local tower.
We’ll see you back here next week.
A cybersecurity lawyer explains a shift in AI governance