
Welcome to the 6 new deep divers who joined us since last Wednesday.
If you haven’t already, subscribe to join our community and receive weekly AI insights, updates and interviews with industry experts straight to your feed.
Within the AI sector, there’s incredible positivity about the potential of AI in healthcare. But we need to stay in touch with the people who are actually working with patients every day (doctors, nurses, medical students and healthcare assistants), and make sure their perspectives are included in AI development.
We haven’t (yet) interviewed doctors firsthand. But this week we headed to Reddit, where clinicians share their unfiltered experiences on subreddits like r/medicine and r/familymedicine. We wanted to find out what they really think of the AI scribe tools that are increasingly being rolled out across medical practice.
And what we’ve gathered is a sense of hope and frustration at the same time: AI as a time-saver, a burden-shifter, a security risk, and (sometimes) a genuine headache.
AI leaders talk about the power of the tech to streamline work and lighten the load for healthcare professionals. Ambient AI scribes (software that listens during consultations and drafts the clinical notes) are one of the most talked-about use cases right now.
On the subreddit r/medicine, one doctor said:
“It saves me time and significantly reduces my cognitive burden from charting.” They added that it’s “not perfect, you still proofread everything.”
In the same thread, another said,
“I spent more time editing the AI note than it takes me to dictate using regular [software].”
Tools like Nuance DAX and Abridge get plenty of mentions. Some clinicians describe them as “game-changing,” while others complain about hallucinations, formatting quirks, and the endless friction of getting them to play nicely with Epic (a dominant electronic medical record software).
In r/FamilyMedicine, one clinician noted just how important it is for these tools to work seamlessly with existing systems:
“Having used Ambience for a few months now, I’ve been quite happy. It’s fairly accurate and getting better with each version…and finally integrated into the EMR which has saved a lot of time.”
That point comes up again and again: when AI scribes connect smoothly with electronic medical records, the benefits compound. Without integration, they risk becoming just another disconnected layer of admin.
Other clinicians emphasise that the technology isn’t perfect – but the time saved outweighs the frustrations. As one family doctor put it:
“DAX has been fantastic so far. You have to go in with the expectation that notes will be longer than what you are used to, but the benefits of total time saved on documentation far outweigh the downsides.”
Longer notes are a small price to pay for fewer late nights catching up on charts.
Possibly the strongest endorsement on Reddit came from a GP who’s been using an AI scribe for some time:
“After using AI scribe personally though if you told me I had to stop, it would feel like being told let’s go back to bloodletting.”
They described inboxes that stay empty, charts that are done on time, and evenings no longer lost to paperwork. Once clinicians experience that shift, it’s hard to imagine returning to old routines.
But even the most positive users are quick to stress that AI doesn’t eliminate the need for human oversight. One clinician, comparing different scribe tools, summed it up like this:
“We utilise Freed AI and Deepcura. Personally prefer Freed AI, definitely saves a lot of time but isn’t foolproof.”
That phrase (‘isn’t foolproof’) could be applied across the board, wherever we use AI to streamline tasks in healthcare. The tools bring relief, but the clinician has to monitor AI outputs carefully.
AI scribes are already lightening the load for doctors; but integration challenges and the need for constant oversight mean they’re not a magic fix.
And within the bubble of the AI development sector, we need to keep listening to the voices of people using these tools every day: because when AI genuinely reduces cognitive burden, frees up time, and eases the administrative weight of clinical work – it’s incredibly valuable.
The common themes we picked up on across Reddit are also reflected in the emerging body of research about AI in healthcare, and in new guidance from governing bodies including the World Health Organisation (WHO). In short, the technology is needed – and at the same time, we have to protect the role of human clinicians in the loop.
WHO has set the vision, and hospitals and researchers are putting it into practice