
Welcome to the 9 new deep divers who joined us since last Wednesday.
If you haven’t already, subscribe to join our community and receive weekly AI insights, updates, and interviews with industry experts straight to your feed.
AI is driving progress in cancer treatment, cardiovascular diagnostics, and clinical workflows. But while innovation continues to accelerate, we have to focus on safety and trust.
Last year, the WHO published its guidance on the ethics and governance of artificial intelligence in healthcare – setting out principles for fairness, transparency, accountability, and inclusivity in the deployment of large AI models. It calls for independent validation of AI tools, strong mechanisms for explainability, and capacity-building support for countries that need it.
The balance between innovation and oversight is at the heart of debates about AI right now – and nowhere does it matter more than in healthcare, where we have to enable both innovation and access to AI tools without leaving any patients behind.
One of the clearest examples of AI’s clinical impact comes from Addenbrooke’s Hospital in Cambridge, UK. Traditionally, radiotherapy planning for cancers such as prostate or head-and-neck cancer takes hours of painstaking work. Now, with the help of an AI tool called Osairis, that time can be cut to minutes.
Osairis is built on deep learning segmentation models trained on anonymised scans to automatically map out tumours and organs at risk – replacing the painstaking manual contouring that specialists once did by hand.
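To give a flavour of what auto-contouring involves (a minimal sketch only – the class labels, array shapes, and random probabilities below are invented for illustration and are not Osairis’s actual architecture or data): a trained segmentation network outputs a probability for each class at every voxel of a scan, and contouring reduces to picking the most probable class per voxel and extracting a binary mask for each structure.

```python
import numpy as np

# Hypothetical per-voxel class probabilities from a trained segmentation
# network, shape (classes, H, W). Class 0 = background, 1 = tumour,
# 2 = organ-at-risk. Everything here is illustrative, not Osairis itself.
rng = np.random.default_rng(0)
probs = rng.random((3, 4, 4))
probs /= probs.sum(axis=0, keepdims=True)  # normalise to a distribution per voxel

# Auto-contouring: pick the most probable class at each voxel...
labels = probs.argmax(axis=0)   # (H, W) integer label map

# ...then extract a binary mask per structure for the planning system.
tumour_mask = (labels == 1)     # a mask like this is what a clinician reviews

print(labels.shape)             # (4, 4)
```

The key clinical point survives even in this toy version: the model proposes the contours, and the specialist’s job shifts from drawing them to reviewing and correcting them.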
In an interview with the Financial Times, Dr Raj Jena (the UK’s first clinical professor of AI in radiation oncology) explained why this matters for patients. Faster planning saves clinicians’ time, and it means treatment can begin sooner – reducing stress and improving outcomes. A second platform, Apollo, is also in development to connect AI developers with clinicians early in the innovation process, to make sure tools are built to meet real needs.
These systems support human expertise, rather than replacing it – which is a principle that aligns closely with the WHO’s call for AI that is trustworthy, interpretable, and human-centred.
Also in the UK, at Imperial College London and Imperial College Healthcare NHS Trust, researchers have built an AI-powered stethoscope capable of detecting serious heart conditions in just 15 seconds.
It combines ECG data with heart sound analysis; trained on data from 1.5 million patients, the stethoscope can spot signs of atrial fibrillation or heart failure faster than conventional methods. For doctors, that means earlier intervention – and potentially life-saving outcomes.
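Combining two signal types like this is often done with some form of feature fusion. The sketch below is a deliberately simplified illustration of that idea – the feature names, dimensions, weights, and threshold are all invented, and the real Imperial system will be far more sophisticated:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Invented features for illustration: a few numbers summarising the ECG
# (e.g. rhythm irregularity) and a few summarising the heart sounds
# (e.g. murmur energy). Real systems would extract many more.
ecg_features = np.array([0.2, -0.5, 1.1])
sound_features = np.array([0.7, 0.1])

# Late fusion: concatenate both feature sets and score them with a single
# linear layer. In practice these weights would be learned from training data.
fused = np.concatenate([ecg_features, sound_features])
weights = np.array([0.4, -0.3, 0.8, 0.6, 0.2])
bias = -0.5

risk = sigmoid(fused @ weights + bias)  # probability-like risk score in (0, 1)
flag = risk > 0.5                       # the threshold sets the trade-off with false positives

print(round(float(risk), 3), bool(flag))
```

The threshold line is where the newsletter’s point about false positives lives: lowering it catches more true cases but flags more healthy patients, which is exactly the trade-off clinicians weigh when deciding whether to trust such a device day to day.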
A report by the New York Post emphasised its promise, but pointed to challenges too: while accuracy is high, false positives are still a problem, and some clinicians stopped using the device within a year. It’s a reminder that AI needs to be as practical as it is powerful – trust is essential for any tech to be adopted in day-to-day medical practice.
Both examples show the delicate balance between innovation and oversight in action. And the WHO’s work on AI in health is designed to help governments and health systems walk that line: setting ethical standards globally, while enabling local adaptation.
Because the reality is that AI in healthcare can’t just serve wealthy hospitals in developed nations. Its true potential lies in democratising access to quality care – by supporting rural clinics, strengthening health systems in low- and middle-income countries, and providing diagnostic tools to communities that don’t have access to specialist doctors.
The WHO strategy shows us the ‘what’ and ‘why’: ethics, inclusivity, and accountability as the foundations for AI in health.
And the Addenbrooke’s and Imperial examples show us the ‘how’: innovation deployed responsibly to support clinicians and improve patient care.
Together, they raise an urgent point: AI in healthcare is very much here. The real challenge now is to make sure it works for everyone, everywhere.
At DeepFest, we’re digging into this intersection of strategy and practice – and we’d love your input.
If you could design one global principle for AI in healthcare, what would it be? Equity, safety, innovation – or something totally different?