
Welcome to the 7 new deep divers who joined us since last Wednesday.
If you haven’t already, subscribe and join our community to receive weekly AI insights, updates, and interviews with industry experts straight to your feed.
Every new rule has a serious impact. Too much regulation strangles creativity. Too little lets chaos run unchecked. It’s the tension at the heart of today’s AI debate – and one that will define whether AI tech can reach its full potential.
The Stanford HAI 2025 AI Index paints a picture of the regulatory surge. In 2024, US federal agencies introduced 59 AI-related regulations – more than double the number from the year before. Across 75 countries, mentions of AI in legislation rose by 21.3% year on year (nearly nine times higher than in 2016).
Policymakers are racing to catch up with innovation. And in doing so, they could slow down the development of the technologies they’re trying to control.
Can we regulate AI without killing innovation?
One way forward comes from rethinking regulation not as a brake, but as a catalyst. The Future of Privacy Forum has made a case for regulatory sandboxes – controlled environments where innovators can test bold ideas, but under the supervision of regulators.
Sandboxes are designed to reduce legal uncertainty while stimulating creativity. They offer a safe space to experiment, prove value, and learn where the risks really lie – before rolling out technologies at scale. Importantly, they shift the tone of governance from prohibition to collaboration.
That collaborative spirit mirrors the scientific approach of Thras Karydis (Co-Founder and CTO at DeepCure Inc.). DeepCure uses AI to tackle the hardest challenges in drug discovery, and Karydis has seen first-hand how innovation flourishes when restrictive frameworks are removed in a thoughtful, careful way.
“Initially, we had no success when we searched a huge, commercially available library of 20 billion compounds,” he told us. “But when we allowed our AI to design compounds without limiting them to commercial libraries, we saw immediate success. Many of the AI molecules were potent, selective and/or predicted to be brain penetrant compounds. This was a huge ‘aha moment’ for us. That’s when we realised that we (and others) were holding back AI by imposing too many restrictions that were tied to past biases and historical compounds.”
In other words, progress came from questioning outdated boundaries that prevented AI from exploring new scientific possibilities.
There’s a critical difference between this and recklessly abandoning oversight and controls: it’s about intentionally creating the conditions and freedom for innovation to thrive within safe boundaries. Public policy could work in a similar way.
Still, even the smartest sandbox has limits if regulation is imposed only from above. That’s why the Harvard Law Review has called for a model of co-governance, in which responsibility is shared across policymakers, technologists, civil society, and affected communities.
This model recognises that innovation doesn’t happen in isolation – and neither should governance. It resonates with the perspective of previous DeepFest speaker Yonah Welker (Explorer, Public Evaluator, Board Member – European Commission Projects), who has worked on European initiatives around accessible and human-centred technologies:
“Our objective is to avoid silos and connect all stakeholders together to ensure human-centred development and adoption,” he told us. For Welker, inclusive governance is essential if we want AI systems that work for everyone.
But if inclusivity is vital, so is speed. The rapid advance of AI (particularly generative models) means that policy can’t afford to lag too far behind. When we spoke to Sol Rashidi (Data & AI Advisor & Former Chief AI Officer, Estée Lauder Companies), she touched on this urgency:
“The pace of change is the fastest we've seen it, and without proper guidelines, principles, protocols and regulation, it can rocket launch into unbelievable innovation while also creating havoc on humanity if the caretakers of the inventions and AI capabilities don't have good intentions behind what they are creating.”
It’s so important to remember this – because AI is a technology with the capacity to change industries, economies, and societies at a speed we’ve never experienced before. Guidelines and regulations are absolutely essential scaffolding for safe progress.
To go back to our question (can we regulate AI without killing innovation?), the evidence suggests yes – but only if we’re collectively willing to rethink what regulation means.
At DeepFest, we believe these shifts are necessary. They reflect the spirit of AI itself – adaptive, creative, and collaborative. And they reflect the spirit of the innovators and visionaries we talk to every day.
Which brings us neatly back to…you.
If you could design AI regulation from scratch, what would it look like?