We are heading into a world where artificial intelligence will increasingly shape decisions about our employment, healthcare, financial services, and criminal justice. According to the Ericsson AI Industry Lab report, an average of 49 per cent of AI and analytics decision-makers said they planned to complete their transformation journey to AI by the end of 2020. It is therefore vital that the insights produced by AI algorithms are inclusive and unbiased.
What is AI bias? A bias is an anomaly in the output of machine learning (ML) algorithms. The aberration can result from prejudiced assumptions made by humans during algorithm development, or from the training data itself: AI is only as good as the data fed into it. If that data reflects historical injustices and social inequities, those patterns will inevitably find their way into the algorithm.
One famous case is Amazon’s attempt to use AI to help with its hiring decisions. The company trained the model on roughly ten years of historical hiring data. Unfortunately, the model absorbed a bias against women, since the tech industry has generally been male-dominated, and the recruiting system overwhelmingly selected male candidates over equally qualified female candidates. Amazon identified the problem and stopped using the algorithm for recruiting purposes.
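This kind of failure is often easier to detect than to fix. Below is a minimal sketch in Python of one common auditing heuristic, comparing selection rates across groups; it is not Amazon’s method, and every number in it is invented purely for illustration.

```python
# Minimal sketch: comparing group selection rates in a hypothetical
# screening model's outputs. All figures below are invented.

def selection_rate(decisions, groups, target):
    """Fraction of candidates in the `target` group that were selected."""
    picked = [d for d, g in zip(decisions, groups) if g == target]
    return sum(picked) / len(picked)

# Hypothetical model outputs: 1 = shortlisted, 0 = rejected
decisions = [1, 1, 1, 0, 1, 1, 0, 0, 1, 0]
groups = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

rate_m = selection_rate(decisions, groups, "m")
rate_f = selection_rate(decisions, groups, "f")

# The "four-fifths rule" used in US employment law treats a ratio
# below 0.8 as evidence of adverse impact on the disfavored group.
ratio = min(rate_m, rate_f) / max(rate_m, rate_f)
print(f"male rate {rate_m:.0%}, female rate {rate_f:.0%}, ratio {ratio:.2f}")
```

Here the ratio comes out at 0.50, well under the 0.8 threshold, which is exactly the sort of red flag an audit of a biased screening system would surface.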
A second flaw stems from incomplete or unrepresentative training data. A classic example is university researchers who survey students, a group that does not necessarily represent the whole population, and then draw broad-based conclusions from the results.
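A toy simulation makes the sampling problem visible. The population and its subgroups below are entirely hypothetical; the point is only that generalizing from a convenient subgroup skews the estimate.

```python
# Minimal sketch of sampling bias: estimating a population average
# from a non-representative sample. All figures are invented.
import random

random.seed(0)

# Hypothetical population of 10,000 people, measured on hours of study
# per week. One easy-to-reach subgroup (say, students at a single
# university) studies far more than everyone else.
students = [random.gauss(30, 5) for _ in range(1_000)]
everyone_else = [random.gauss(5, 3) for _ in range(9_000)]
population = students + everyone_else

# Surveying only the convenient subgroup and generalizing from it
# badly overstates the population average.
biased_sample = random.sample(students, 200)
fair_sample = random.sample(population, 200)

print(f"true mean:          {sum(population) / len(population):.1f}")
print(f"biased-sample mean: {sum(biased_sample) / len(biased_sample):.1f}")
print(f"fair-sample mean:   {sum(fair_sample) / len(fair_sample):.1f}")
```

An ML model trained on the biased sample would inherit the same distortion, which is why representative data collection matters as much as model design.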
Conscious of the harm that flawed algorithms can do, a global consensus is growing around the need to identify and solve the issues that arise from AI bias.
Regulating a more ethical AI
Governments are now proactively working to establish ground rules for AI behavior. In April 2021, the European Commission proposed its first-ever legal framework on AI, which it says will “guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU.” The approach sets strict requirements for AI systems based on a pre-defined level of risk. It also places an immediate ban on AI systems considered to “be a threat to the safety, livelihood and rights of people”, including “systems that manipulate human behavior, circumvent users’ free will and allow social scoring by governments”. Expect other states to follow.
Even industry heavyweights are sounding the alarm. In a blog post on ethics and AI, experts at tech giant Ericsson outline a prevention, detection and response framework similar to those already in place for other ethics and compliance programs, such as anti-corruption and the prevention of tax evasion. The post warns, “Trustworthiness is emerging as a dominant prerequisite for AI, and companies must take a proactive stance. If they don’t, we face a risk of regulatory uncertainty or over-regulation that will impede the uptake of AI, and subsequently societal growth.”
AI as a means to an end
There’s also a need to ensure that AI does not perpetuate the same social inequities it was supposed to solve. In a webinar, Prof Eric Xing, president of the Abu Dhabi, UAE-based Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), notes that the development of state-of-the-art AI models, and even the definition of research topics, is owned by a few AI superpowers with vast resources, leaving AI solutions and outcomes out of reach of the general population. “Therefore, there isn’t enough balance or diversity. We need to make AI affordable, so it’s used and delivers value to the whole society, rather than being a tool only used by a small number of nations or corporations.”
Joy Buolamwini, AI researcher and contributor to the Netflix documentary Coded Bias, questions the very concept of AI inevitability. “One of the questions we should be asking in the first place is if the technology is necessary or if there are alternatives, and after we have asked that if the benefits outweigh the harm, we also need to do algorithmic hygiene. Algorithmic hygiene looks at who these systems work for and who it doesn’t.”
AI should not be an end in itself; it should complement human efforts. That extends to addressing AI’s own inadequacies, such as bias, with humans and machines working together on mitigation. It should be baked into the system that when the algorithm notices an inconsistency, it surfaces options so that humans can double-check the result or choose from a set of variables.
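One common way to implement that principle is the human-in-the-loop pattern: act automatically only on confident predictions and route ambiguous cases to a person. The sketch below assumes a hypothetical model that reports a confidence score; the 0.9 threshold and the example predictions are illustrative, not prescriptive.

```python
# Minimal human-in-the-loop sketch: low-confidence predictions are
# escalated to a human reviewer instead of being acted on automatically.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str        # the model's suggested decision
    confidence: float # the model's self-reported confidence, 0..1

def decide(pred: Prediction, threshold: float = 0.9) -> str:
    """Act automatically only when the model is confident; otherwise
    hand the case, with context, to a human for the final call."""
    if pred.confidence >= threshold:
        return f"auto: {pred.label}"
    # Surface the model's top option rather than hiding the ambiguity,
    # so the reviewer can double-check or override it.
    return (f"human review: model suggests '{pred.label}' "
            f"at {pred.confidence:.0%} confidence")

print(decide(Prediction("approve", 0.97)))
print(decide(Prediction("reject", 0.55)))
```

The design choice here is that the machine never silently resolves its own uncertainty: ambiguity is made visible and handed to a person along with the model’s reasoning.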
Technology is too important to be left to technologists. Currently, AI is the domain of PhD technologists and mathematicians. Sociologists, ethicists, psychologists, and humanities experts will need to join the ranks of AI development teams, where they can raise questions, illuminate possible blind spots and check assumptions, ensuring such powerful tools are built on a wide range of perspectives.