Our favourite moments from DeepDive 2025

If you haven’t already, subscribe to join our community and receive weekly AI insights, updates, and interviews with industry experts, straight to your feed.


DeepDive 

Your weekly immersion in AI 

If 2025 had a soundtrack, it would be the hum of human questions. Who gets to control AI? What do we owe each other as this technology accelerates? And how do we make sure that the next wave of tools elevates human capability, rather than eroding it? 

To answer these questions (and many more), we spent this year in conversation with DeepFest speakers – researchers, founders, policy experts, storytellers, and builders. And each of them has offered a different way of seeing where we are and where we’re heading. 

As the year draws to a close, we’ve pulled together seven moments that stayed with us – because they were honest, sharp, and offered perspectives we hadn’t heard anywhere else.

1. Mia Shah-Dand on systems not designed for everyone

Mia Shah-Dand (CEO at Lighthouse3; Founder at Women in AI Ethics™) articulated the lived consequences of exclusion in AI with striking clarity. Her work consistently draws attention to who is centred (and who isn’t) when technologies are built.

She said: 

“Women are at the forefront of AI Ethics because many scholars and researchers who interrogated these systems found that they are not designed or optimised for us.”

Bias shows up in real systems and affects real people. And meaningful progress requires more than good intentions – it demands different people at the decision-making table.

2. Lee Tiedrich on the hidden environmental impacts of AI

For Lee Tiedrich (AI expert for the OECD, and Professor at Duke University, to name just two of her hats), sustainable AI revolves around how systems change human behaviour, and how those behavioural changes ripple outward.

As she explained: 

“Indirect impacts include how the AI system’s use influences behaviours – such as encouraging or discouraging reliance on gasoline-powered vehicles as opposed to more sustainable alternatives.”

It was one of the most nuanced reflections we heard this year: helping us understand that sustainability is a moving ecosystem of design choices, incentives, and feedback loops.

3. Marcello Mari on transparency as a tool for accountability

Bias is often invisible unless you have the means to surface it. Marcello Mari (Founder and CEO at SingularityDAO) argues that blockchain offers one of those means – an auditable footprint of how AI acts and why.

His standout line:

“By recording AI model outputs, decisions, and reasoning on the blockchain, biases in decision-making processes can be identified and challenged.”

It’s a vision of accountability that doesn’t rely on trust alone, but on traceability.

4. Henrik Kniberg on the moment generative AI changed everything

Some interview moments stop us in our tracks because they capture the intensity of encountering a new technology for the first time. Henrik Kniberg (Co-Founder of Ymnig.ai) did exactly that.

He said:

“When ChatGPT 4 came I took a closer look at the emerging field of generative AI. I shut off my phone and locked myself away in a cabin for a week to really understand how this works.”

The sentiment hit us hard – the sense that something big had arrived, and it required our full attention.

5. Aadil Jaleel Choudhry on building AI for justice systems

In a year when AI governance and public sector transformation dominated headlines, Aadil Jaleel Choudhry (Co-Founder and CEO at vResolv.io) reminded us what this work looks like on the ground.

His team’s approach:

“We collaborated with a Supreme Court legal advisor to deeply understand the judicial landscape, identifying pain points that technology could alleviate.”

There’s something powerful about AI builders who start not with the algorithm, but with the people, processes, and institutions they hope to support.

6. Marcello Mari on AGI as a public good

Later, we spoke to Marcello Mari again – this time about the long-term trajectory of AI and blockchain. In one sentence, he captured the stakes of AGI development.

“AGI will be the most important technology ever created by humanity and we can’t afford it to be developed within closed doors.”

This was one of the clearest calls for openness we heard all year.

7. Solange El Rassi on journalism’s irreplaceable human core

Among all the debates about automation, Solange El Rassi (Journalist, Storyteller, News Anchor, and PR & Marketing Expert) offered a perspective grounded in craft.

She told us:

“Reporting the news goes beyond data – it involves capturing the essence of human experiences, contextualising complex issues, and connecting with audiences on an emotional level.”

It’s a reminder that some professions will always be profoundly human – and if we don’t respect that, we risk losing what gives them meaning.

What do these moments mean? 

Across all these conversations, we found that 2025 has been a year defined by human reflection. People are asking better questions and demanding better systems. And we’re working together to imagine a future in which AI serves society – not the other way around. 

So as we head into the new year, we’d love to hear from you: How has your perspective on AI changed over the last 12 months?

We’ll see you back here next week.
