
Strategy meets reality for AI in healthcare
WHO has set the vision, and hospitals and researchers are putting it into practice
Welcome to the 5 new deep divers who joined us since last Wednesday.
If you haven’t already, subscribe and join our community to receive weekly AI insights, updates and interviews with industry experts, straight to your feed.
We talk a lot about what AI can do in healthcare, but far less about what doctors and patients want it to do. Research published this year shows that trust is the biggest barrier to AI adoption – so if we want AI to truly make a difference in the healthcare sector, we need to refocus: away from tech development and towards building confidence.
A new Future Health Index from Philips explores the challenge of building trust in AI. It’s based on surveys of more than 16,000 patients and 1,900 healthcare professionals (HCPs) across 16 countries – and the findings are a cause for concern.
People are most positive if AI improves their health in an obvious way (45%), reduces medical errors (43%), shortens waiting times (43%), or frees up doctors’ time for face-to-face care (39%).
And when it comes to who they trust to explain how AI is being used, patients don’t want a glossy marketing campaign or a news story. They want their doctor.
That’s a heavy responsibility for healthcare professionals. And to act as trust-builders for their patients, doctors need to trust AI themselves.
So the Philips study also asked healthcare professionals what they need for that to happen.
Notably, job security came at the bottom of their list. Doctors aren’t worried about being replaced by AI, but they are worried about being asked to use it without the right guardrails.
Last week we wrote about the World Health Organisation’s guidance on the ethics and governance of artificial intelligence across healthcare.
The WHO identifies six core principles for trustworthy AI: protect autonomy; promote well-being and safety; ensure transparency and intelligibility; foster responsibility and accountability; ensure inclusiveness and equity; and promote sustainability.
It also warns of real risks that undermine trust – like ‘automation bias’ (the tendency to over-rely on machine outputs), and the degradation of clinical skills if AI replaces experience.
In Canada, CADTH (the country’s drug and health technology agency) publishes an annual list of technologies to watch. For 2025, the top five AI applications in health are notetaking, clinical training, detection and diagnosis, treatment planning, and remote monitoring.
But alongside the opportunities, CADTH flags five urgent issues.
And those issues map almost perfectly onto what healthcare professionals told Philips they need in order to trust AI.
We’re at a crossroads. On one side, patients are waiting too long, and clinicians are drowning in admin. AI could help fix both problems – from automating repetitive tasks to predicting deterioration and reducing admissions. On the other side, people won’t accept AI unless they’re confident it’s safe, explainable, and overseen by the humans they already trust.
Which means the future of healthcare AI relies on building trustworthy systems that are…
As the Philips report shows, trust is the bottleneck for AI adoption. Crack that, and healthcare AI can ramp up its real-world impact.
We know our DeepFest audience is pretty positive about the future of AI. So we want to know how you feel – do you (or would you) trust AI tools in your clinical care right now? And if not, why?