In healthcare, patients want trust before tech…

Welcome to the 5 new deep divers who joined us since last Wednesday.

If you haven’t already, subscribe and join our community to receive weekly AI insights, updates and interviews with industry experts, straight to your feed.


DeepDive 

We talk a lot about what AI can do in healthcare, but there are fewer conversations about what doctors and patients want AI to do. Research published this year shows that trust is the biggest barrier to AI adoption – so if we want AI to truly make a difference to the healthcare sector, we need to refocus: away from tech development, and towards building confidence.

A new report shows how wide the trust gap is 

A new Future Health Index from Philips explores the challenge of building trust in AI. It’s based on surveys of more than 16,000 patients and 1,900 healthcare professionals (HCPs) across 16 countries – and the findings are a cause for concern. 

  • Waiting times are crippling care. Globally, 73% of patients have faced delays in seeing a specialist. On average, the longest wait is 70 days – and in Canada and Spain it stretches to around four months. A third of patients said their health worsened because of delays, and more than one in four were hospitalised as a result.
  • Data gaps waste precious time – 77% of HCPs say they lose time because data is incomplete or hard to access. For a third of them, that’s 45 minutes or more per shift – which adds up to four working weeks every year (see the quick sanity check after this list).
  • There’s a difference between confidence and comfort. Doctors are broadly optimistic about AI, but patients are cautious – especially when the stakes rise. For example:
    • 87% of HCPs are confident in AI for documenting notes, but only 64% of patients feel comfortable with it.
    • 88% of HCPs are confident in AI supporting diagnosis, while just 76% of patients are comfortable.
    • The smallest gap is in admin tasks like appointment booking, where AI feels less risky.
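
That “four working weeks” figure is easy to sanity-check. Here’s a rough back-of-envelope sketch in Python – the 45 minutes per shift comes from the report, but the ~230 shifts a year and the 40-hour working week are our own assumptions, not Philips’:

  # Rough check: 45 minutes of lost time per shift, scaled up over a year
  minutes_lost_per_shift = 45        # from the Philips survey (a third of HCPs)
  shifts_per_year = 230              # assumption: ~5 shifts a week over ~46 working weeks
  hours_per_working_week = 40        # assumption: a standard full-time week

  hours_lost_per_year = minutes_lost_per_shift * shifts_per_year / 60
  weeks_lost_per_year = hours_lost_per_year / hours_per_working_week

  print(round(hours_lost_per_year))      # ≈ 172 hours
  print(round(weeks_lost_per_year, 1))   # ≈ 4.3 working weeks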

What do patients want from AI? 

People are most positive if AI improves their health in an obvious way (45%), reduces medical errors (43%), shortens waiting times (43%), or frees up doctors’ time for face-to-face care (39%).

And when it comes to who they trust to explain how AI is being used, patients don’t want a glossy marketing campaign or a news story. They want their doctor.

Clinicians are the trust-builders 

That’s a heavy responsibility for healthcare professionals. And in order to act as the trust-builders for their patients, doctors need to trust AI themselves. 

So the Philips study also asked them what they need for that to happen. They said they need:

  • Clarity on legal liability if something goes wrong.
  • Clear usage guidelines that spell out when and how AI can be applied.
  • Scientific evidence that AI improves outcomes.
  • Continuous monitoring and evaluation to make sure systems are safe in practice.

Notably, job security came at the bottom of the list. Doctors aren’t worried about being replaced by AI, but they are worried about being asked to use it without the right guardrails. 

Which takes us neatly back to the WHO

Last week we wrote about the World Health Organisation’s guidance on the ethics and governance of artificial intelligence in healthcare. 

The WHO identifies six core principles for trustworthy AI: protect autonomy; promote well-being and safety; ensure transparency and intelligibility; foster responsibility and accountability; ensure inclusiveness and equity; and promote sustainability.

It also warns of real risks that undermine trust – like ‘automation bias’ (the tendency to over-rely on machine outputs), and the degradation of clinical skills if AI replaces experience. 

AI on the watch list 

In Canada, CADTH (the nation’s drug agency) publishes an annual list of technologies to watch. For 2025, the top five AI applications in health are notetaking, clinical training, detection and diagnosis, treatment planning, and remote monitoring.

But alongside the opportunities, CADTH flags five urgent issues: 

  1. Privacy and data security
  2. Liability and accountability
  3. Data quality and bias
  4. Data sovereignty and governance
  5. The environmental costs of large-scale AI 

And those issues map almost perfectly onto what healthcare professionals told Philips they need in order to trust AI.

So where does that leave us? 

We’re at a crossroads. On one side, patients are waiting too long, and clinicians are drowning in admin. AI could help fix both problems – from automating repetitive tasks to predicting deterioration and reducing admissions. On the other side, people won’t accept AI unless they’re confident it’s safe, explainable, and overseen by the humans they already trust.

Which means the future of healthcare AI relies on building trustworthy systems that are…

  • Transparent about how they’re used.
  • Governed by clear rules and liability frameworks.
  • Audited for safety, bias and equity.
  • And designed to always keep the clinician–patient relationship at the centre.

As the Philips report shows, trust is the bottleneck for AI adoption. Crack that, and healthcare AI can ramp up its real-world impact.

How do you feel about AI in healthcare? 

We know our DeepFest audience is pretty positive about the future of AI. So we want to know how you feel – do you (or would you) trust AI tools in your clinical care right now? And if not, why not?
