
Welcome to the 6 new deep divers who joined us since last week.
If you haven’t already, subscribe and join our community to receive weekly AI insights, updates and interviews with industry experts, straight to your feed.
AI is becoming more deeply embedded in our lives every day, so we urgently need to make sure the systems we use work for everyone. The reality, to date, is that existing AI tools don’t work for everyone, and we need international, cross-demographic collaboration to make sure that changes.
We’ve seen flawed facial recognition software that misidentifies people of colour, and speech recognition systems that struggle to understand neurodivergent speech patterns – to name just a couple of issues that have been uncovered. AI systems don’t just reflect the biases that already exist in society – they can amplify those biases and increase their negative impact.
So how do we fix it?
Yonah Welker (Explorer, Public Evaluator, Board Member - European Commission Projects) explores the intersection of AI, policy, and accessibility in his work. And when we spoke to him, he made it clear that if we want to make AI better, we have to start by making data better.
“AI can do incredible things,” Welker said, “but only if we feed it the right data.”
If we look specifically at accessibility, for example, AI has the potential to transform assistive technologies and make life better for millions of people with disabilities:
“AI algorithms can be used to augment smart wheelchairs, walking sticks, geolocation and city tools, bionic and rehabilitation technologies, adding adaptiveness and personalisation.”
For people with hearing and visual impairments, AI-enhanced computer vision could turn sign language into text, or turn pictures into sounds. And for those who are neurodivergent or have cognitive conditions, AI can offer support in an increasingly wide range of ways, including emotion recognition, articulation tracking, and communication aids.
It sounds great in theory, but providing this kind of support isn’t simple, because no two people share the exact same accessibility needs.
“Despite the significant possibilities of AI, every disability is unique,” Welker added, “and may pose challenges for algorithms associated with proper recognition, analysis, predictions and outcomes.”
That uniqueness is often what stumps the systems. Algorithms rely on patterns – and if your body, voice, or behaviour doesn’t match the data the AI was trained on, the system fails. That might only cause frustration, but it can also have dangerous consequences: a voice command ignored by a car, for example, or a warning that a visually impaired user never receives, could have serious impacts.
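To make that concrete, here is a minimal, purely illustrative sketch in Python – not anything Welker described, and not tied to any real product – of how a team might check for exactly this kind of failure. Instead of reporting one overall accuracy figure, the evaluation is broken down per user group, so a system that works well on average but fails for, say, dysarthric speech becomes visible immediately.

```python
from collections import defaultdict

def accuracy_by_group(examples, predict):
    """Disaggregated evaluation: report accuracy per user group instead of
    one overall average, so failures that only affect a minority of users
    aren't hidden by a strong headline number."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for example in examples:
        group = example["group"]
        total[group] += 1
        if predict(example["input"]) == example["label"]:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical test set: each example is tagged with the user group it
# came from (the groups and labels here are invented for illustration).
test_set = [
    {"input": "turn on the lights", "label": "lights_on", "group": "typical speech"},
    {"input": "turn on the lights", "label": "lights_on", "group": "dysarthric speech"},
    # ... many more examples per group ...
]

# scores = accuracy_by_group(test_set, my_model.predict)
# A large accuracy gap between groups is the signal that the training data
# under-represents some users, even when the overall average looks fine.
```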
We asked Welker why AI systems produce incorrect outcomes, and he said it can be due to:
Importantly, these problems don’t always (or even often) originate from bad intentions. Developers simply don’t have access to high-quality data that reflects a wide range of human experience. This was highlighted in a 2023 UNESCO report, which found that over 60% of AI systems lack meaningful datasets on disability and inclusion.
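As a rough illustration of what auditing that gap could look like in practice – a hypothetical sketch, not UNESCO’s methodology – a team could count how many examples each group actually contributes to a training set before building on it:

```python
from collections import Counter

def coverage_report(records, group_field="group", minimum=500):
    """Count how many training examples each group contributes and flag
    groups below a chosen minimum. The threshold is an arbitrary
    placeholder, not an established standard."""
    counts = Counter(record.get(group_field, "unlabelled") for record in records)
    return {
        group: {"examples": n, "under_represented": n < minimum}
        for group, n in counts.items()
    }

# Hypothetical usage, assuming training_records is a list of dicts loaded
# from a dataset manifest:
# print(coverage_report(training_records, group_field="speech_profile"))
```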
The availability of good, relevant, accurate data converges with other limitations affecting AI systems – as Welker noted, “constraints of technology or a system’s design, social effects; organisational, institutional and policy limitations.”
Here are five key ways we can improve AI data to support more inclusive outcomes:
Fixing the data is just one part of the solution. As Welker made clear, inclusive AI is a cross-sector challenge – and we’ll only meet it by working across disciplines, across communities, and without silos.
“To serve humanity, we need to ensure fairness, transparency and accountability for data, algorithms, systems and assessment.”
It’s a big task. But with a focus on data quality, and with the right voices in the room, we can build AI systems that truly work for everyone.
What’s the biggest obstacle standing in the way of inclusive AI right now? We want to know what you think.