Will people trust AI?

Welcome to the 18 new deep divers who joined us since last Wednesday. If you haven’t already, subscribe and join our community to receive weekly AI insights, updates, and interviews with industry experts, straight to your feed.


DeepDive

Your weekly immersion in AI. 

When we interviewed Angela Kane (Vice President at the International Institute for Peace, Vienna; Former UN High Representative for Disarmament Affairs), we asked her about the key obstacles standing in the way of AI being able to support positive relationships between individuals, organisations, societies, and countries. 

Kane said:

“The key obstacle is lack of trust within societies. How do we know if reported news are factual or misinformation? Can we be sure whether video clips or pictures on social media are real or synthetic media? How can we be certain that election outcomes are not manipulated by actors outside the country?”

“As AI assists in everyday tasks and decision-making, how can we be sure that those decisions – on our health, our social benefits – are not biased by obscure algorithms?” 

Building trust in AI is a long road

Already, the public has been exposed to the dangers of AI bias and misinformation – so we’re not starting from a clean slate. 

Building that trust is essential to scaling up AI innovation and to promoting the equal, fair adoption of AI opportunities – both within industries and among the general population. And technological advancement isn’t the only route to improving trust: we also have to engage in trust-building work through behavioural change, organisational shifts, education, and AI governance.

It starts with better data

The first layer of trustworthy AI has to be high-quality data. This is fundamental: people need to know they can rely on the quality of the data feeding AI models, because that’s what determines the quality of the outputs.

And while GenAI has given more people access to AI tools, it’s also exposed them to the pitfalls of bad data. Many users have now seen firsthand the unreliable outputs that poor data produces – so the dangers are no longer hypothetical.

Speaking to Forbes last year, Bruno Aziza (Partner at CapitalG) urged AI developers to step up their data quality initiatives – ensuring that data is reviewed and tested internally, and on a regular basis. This allows teams to identify issues, blind spots, or inequalities within data before it reaches customers – and means that data quality can be maintained proactively.
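For readers who want a concrete picture, here’s a minimal sketch of what that kind of regular, internal data review could look like in code – assuming a tabular dataset handled with pandas. The file name, the group column, and the 5% threshold are illustrative assumptions, not details from the Forbes piece.

```python
import pandas as pd

def review_dataset(df: pd.DataFrame, group_column: str) -> dict:
    """Run basic quality checks on a dataset before it feeds a model.

    Reports missing values, duplicate rows, and group representation –
    a rough proxy for the blind spots and inequalities worth catching
    before data reaches customers.
    """
    return {
        # Share of missing values in each column
        "missing_share": df.isna().mean().to_dict(),
        # Exact duplicate rows can silently skew training
        "duplicate_rows": int(df.duplicated().sum()),
        # How evenly each group is represented in the data
        "group_shares": df[group_column].value_counts(normalize=True).to_dict(),
    }

# Illustrative usage: flag columns that need human review
df = pd.read_csv("training_data.csv")  # hypothetical dataset
report = review_dataset(df, group_column="region")  # "region" is assumed

for column, share in report["missing_share"].items():
    if share > 0.05:  # threshold is an assumption – tune per dataset
        print(f"Review needed: {column} is {share:.0%} missing")
```

Checks like these are only a first pass – the point, as Aziza suggests, is that they run on a schedule, so problems surface before customers see them.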

And accessible, transparent AI governance will provide necessary assurance

Within organisations and governments, and even between nations, AI governance must become more than a talking point. Governance mechanisms allow for effective risk management and uphold the ethical standards that reassure people they’re well protected – and that there are clear routes to resolution when something goes wrong.

Adopting AI governance means that developers and organisations deploying AI can meet legal requirements and improve their systems – giving those systems a real foundation to succeed.

But perhaps more importantly, governance enables trust. Perceptions of AI are shaky – sometimes with good reason, sometimes not. Clear governance helps steady those perceptions, making people feel protected from malpractice and from poorly performing AI models.

Clarity of purpose will help foster positive attitudes and AI trust

Here’s the thing: AI is everywhere, and people are using it for everything. That’s a powerful recipe for confusion and uncertainty.

A clearly defined purpose for AI models within specific use cases – one that lets people understand why AI is being used and the benefits it’s intended to bring – could help refocus the public’s relationship with AI and create a sense of benefit that outweighs mistrust.

Purpose could stop people from feeling like AI trickery is hiding around every corner. They could relax a little more, knowing when, where, and why they can expect to be exposed to AI tools (or asked to use them) – and feel confident that this technology isn’t an insidious undercurrent in their lives, but a helpful, transparent layer of potential, opportunity, and growth.

We want to know what you think

Can governance and purpose create a culture of trust around AI? Open this newsletter on LinkedIn and share your perspective in the comment section. We’ll see you there. 


Did you miss DeepFest 2024? Don’t worry – register now to secure your place at the 2025 edition. We can’t wait to see you there.
