Imagine a group of cartographers mapping a new continent – but instead of coastlines and mountains, they’re charting how intelligence itself spreads across our digital world.
Well, that’s kind of what the next generation of AI governance looks like.
There are tech giants and regulators in the mix, of course. But there’s also a diverse network of researchers, collectives, and advocates – each asking how we develop and deploy generative AI responsibly, safely, and creatively.
Here are five organisations (some big, some beautifully niche) that are shaping the answer.
GovAI is kind of like AI’s quiet conscience. Born out of the University of Oxford, it’s now an independent research institute building the field of AI governance itself.
Its team works across political science, economics, computer science and law to answer tough questions: How do we govern advanced AI systems? Who should be accountable for their decisions?
GovAI’s papers and policy frameworks have shaped thinking across borders. For experts seeking depth and rigour – this is a trusted voice in the field.
While many organisations study today’s AI challenges, Schmidt Sciences asks what the world might need from AI in 2050.
The AI2050 programme funds researchers tackling the ‘hard problems’ – from alignment to AI safety science – across universities including MIT and Stanford.
Their philosophy is that the future of AI should be beneficial by design. It’s philanthropy meeting foresight – and a sign that long-term thinking is being resourced at scale.
If GenAI governance sometimes feels dominated by billion-dollar labs, EleutherAI offers a valuable counterbalance.
What began as a grassroots Discord community is now a non-profit research lab developing open-source large language models and datasets like The Pile.
Their current focus on interpretability and alignment makes them a vital technical counterpart to policy-driven governance. In short: they’re proving that openness and rigour can coexist – and that transparency is critical for safety.
AI reflects our societies back at us. The Algorithmic Justice League, founded by researcher and artist Joy Buolamwini, shines light on the biases hidden in data and algorithms – including those used in GenAI systems.
AJL combines empirical research, advocacy and storytelling to expose where algorithms fail fairness tests – from recruitment tools to creative AI. Their message is simple but powerful: if AI is to serve everyone, it must see everyone.
A newer but fast-rising player, IASEAI convenes governments, academics and industry to build shared safety and ethics standards for AI worldwide.
Their inaugural conference, held in Paris in 2025, brought together researchers from the OECD, the UN, and industry labs to discuss interpretability, disinformation and global coordination.
It’s one of the few bodies trying to bridge global perspectives on AI ethics – and a reminder that good governance must cross borders as readily as the tech itself.
Generative AI is fast becoming a part of our global digital infrastructure. And that makes independent oversight (from open-source researchers, civil society advocates, and policy innovators) absolutely vital.
These organisations, in all their diversity, keep the ecosystem honest: questioning assumptions and making sure that progress in capability is matched by progress in care.
Because the future of AI shouldn’t be written by any one lab, government, or company. It should be co-authored by all of us.
Which organisations are your current go-tos for independent critical thinking about the future of AI? We want to know.
See you back here next week.