Towards a Fair World: Can AI Overcome Systemic Bias?

Saad Toma (General Manager at IBM) came to #LEAP22 to talk about AI. And with good reason: “Twenty years ago the prediction was that every company would become an internet company,” he said. “Now people are saying every company will become an AI company.” 

In 2019, research by Accenture found that 75% of executives believed they’d be at risk of going out of business within five years if they didn’t scale AI. In the same year, 37% of organisations surveyed by Gartner were using AI in the workplace, compared with only 10% in a similar survey four years prior. 

Now, in 2022, research collated by CompTIA suggests that AI has become mainstream technology in business (86% of CEOs said so), and that 91.5% of leading businesses invest in AI on an ongoing basis. 

AI is part of the future across industries and in government operations. And that means it’s incredibly important that AI treats data in an unbiased way. 

Is there bias in AI?

AI is used to analyse a huge range of data sets for a huge range of purposes, so the importance of fair analysis varies between use cases. But bias definitely exists in AI – and rather than being treated as a purely technical problem, it needs to be approached as a human issue. 

Toma cited loan applications as an example of how AI bias could negatively affect people’s lives. “The bank institute could potentially use data gathered on demographics, and certain areas of a particular city, or a state, or a country, could be deemed as unattractive in terms of credit worthiness,” he pointed out, “because of poor conditions maybe, or education standards,” resulting in higher interest rates being unfairly applied – or loans being flat-out denied. 
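To make that loan example concrete, here’s a minimal sketch in Python – using entirely hypothetical data, feature names, and numbers, not IBM’s or any bank’s actual model – of how a scoring model trained on historically skewed lending decisions ends up reproducing the skew:

```python
# Hypothetical illustration only: synthetic applicants and invented decision rules.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic applicants: income (a legitimate signal) and a postcode group
# (0 or 1) standing in for a demographic proxy.
income = rng.normal(50, 15, n)
postcode_group = rng.integers(0, 2, n)

# Historical approvals were biased: group 1 was approved less often
# than group 0 at the same income level.
approved = (income - 10 * postcode_group + rng.normal(0, 5, n)) > 45

# Train a simple model on those historical decisions.
X = np.column_stack([income, postcode_group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# The trained model carries the historical bias forward into new predictions.
for group in (0, 1):
    mask = postcode_group == group
    rate = model.predict(X[mask]).mean()
    print(f"Predicted approval rate, postcode group {group}: {rate:.0%}")
```

Nothing in the code “decides” to discriminate; the disparity comes entirely from the historical decisions used as training data.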

You might assume that such a bias in an AI model must have come from machine learning: the model has encountered those demographics before, gathered data on them, and come to the mathematical conclusion that individuals who share those demographic data points are a high-risk category for money lending. But a new report on AI bias by the US National Institute of Standards and Technology (NIST) found that if we’re to successfully manage AI bias, we also have to address the human and systemic biases that feed into the machine learning models we create. 

One of the report’s authors, Reva Schwartz (principal investigator for AI bias at NIST), noted that context is everything: “AI systems do not operate in isolation. They help people make decisions that directly affect other people’s lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI.” 

Those factors might include the technology itself, but they also include the impact the technology has; and it’s those real-life negative impacts that stick in people’s minds. Studies, including one by researchers at UC Berkeley, have found racial bias in mortgage algorithms that results in Black and Latinx borrowers being charged higher interest rates. Researchers at the University of Melbourne identified shocking cases of recruiting algorithms working with a strong bias against hiring women. A series of studies on facial recognition software, also by NIST, found that darker-skinned women were misidentified 37% more often than women with lighter skin tones; and an AI application that’s routinely used in America to predict clinical risk has been found to cause inconsistent referrals to specialists, depending on the race of the patient.

These are just a few of many examples of AI perpetuating, and even exacerbating, existing biases in society. But what’s the solution?

How to make AI fair

The reality is that making AI completely fair is a huge task, because human beings are not completely fair. AI has to operate within the societal systems it’s created in. But in an ideal world AI would enable us to spot and override human bias – rather than sinking us deeper into a rigged system. 

First, we have to understand that AI cannot understand fairness/unfairness from a human perspective. Machine learning algorithms aren’t trying to privilege one demographic over another; they’re just using the data available to them in the way they’ve been designed to use it. 

Logically, then, we need to do everything we can to provide AI models with balanced and unbiased data. If you feed an AI model biased data, its analysis will be biased too. Then, even with data provided in the most balanced form possible, humans have to keep questioning AI results and feeding that scrutiny back, so the model can be adjusted for greater fairness and fewer points of bias. 
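As a minimal illustration of that human feedback loop – with hypothetical decisions, groups, and a made-up review threshold – a routine audit of model outcomes might look something like this:

```python
# Hypothetical audit sketch: compare a model's approval rates across groups
# and flag large gaps for human review.
def approval_rate_by_group(decisions, groups):
    """Return the approval rate for each group in the data."""
    rates = {}
    for group in set(groups):
        group_decisions = [d for d, g in zip(decisions, groups) if g == group]
        rates[group] = sum(group_decisions) / len(group_decisions)
    return rates

# Example: decisions produced by some AI model, alongside each applicant's group.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = approval_rate_by_group(decisions, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)
if gap > 0.2:  # the threshold here is an assumption; a real policy would set its own
    print(f"Warning: approval rates differ by {gap:.0%} across groups - review the model.")
```

The point isn’t the specific metric; it’s that someone is routinely looking at the outputs, group by group, and has a channel for acting on what they find.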

According to Toma, AI must be open to inspection, and it must be explainable. Open to inspection means that “a consumer, an end user, or a government institution could and should know how that data is being used, and how AI is being implemented across that data platform.” And explainability is “about inputs and outputs” – every interested person should be able to understand when, how, and why data is entered into an AI system, and what happens to that data before the AI arrives at an outcome.
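To illustrate what “inputs and outputs” explainability can look like in practice, here’s a small sketch – the scoring model, feature names, weights, and threshold are all invented for this example – that reports how much each input contributed to a decision:

```python
# Hypothetical linear scoring model: every name and number below is illustrative.
feature_names = ["income", "years_employed", "existing_debt"]
weights = {"income": 0.6, "years_employed": 0.3, "existing_debt": -0.8}
threshold = 40.0

def explain_decision(applicant):
    # Per-feature contributions make the path from inputs to output visible.
    contributions = {name: weights[name] * applicant[name] for name in feature_names}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "decline"
    return decision, score, contributions

decision, score, contributions = explain_decision(
    {"income": 55.0, "years_employed": 4.0, "existing_debt": 12.0}
)
print(f"Decision: {decision} (score {score:.1f}, threshold {threshold})")
for name, value in contributions.items():
    print(f"  {name}: contributed {value:+.1f} to the score")
```

Real systems are far more complex than a weighted sum, but the principle Toma describes is the same: the route from inputs to outcome should be something an affected person can see and question.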

This drives transparency. Everyone involved gets to participate in the AI model with an awareness of the impact it could have, and with the right to question that impact (and, by extension, to feed back into the modification of the AI’s design). 
AI can’t just be created and then put out into the world to do its thing. We need to engage in a continuous cycle of testing AI in use and questioning, with curiosity, what it is doing and why. And when we do that, we can build AI models that are better able to navigate systemic bias without strengthening it further.
