AI, AGI, and Ethical Issues

“The journey toward human-like artificial intelligence is full of ethical deceptions… the development of full [general] artificial intelligence could spell the end of the human race.” – Stephen Hawking

Artificial Intelligence (AI) and Artificial General Intelligence (AGI) have their proponents and opponents. AI works on relatively simple, well-defined tasks, while AGI acquires diverse knowledge and acts on it independently, the way humans do (see sidebar). While AI is a viable technology, AGI remains hypothetical, often considered the domain of science fiction writers. The question on many minds: when will AGI become a viable technology?

Not anytime soon, according to the experts. The late Paul Allen, co-founder of Microsoft, argued there is no way AGI will happen in our lifetimes. Demis Hassabis, co-founder and CEO of DeepMind, says AGI is by no means just around the corner.[1] Both Bill Gates and the late Stephen Hawking believed AGI would happen in our lifetimes, but that it would be extremely dangerous. They feared that, if left unchecked, AGI could lead to nothing less than the eradication of humankind.[2]

“Ethics and Responsible AI and AGI is a diverse, complex, and often misunderstood topic,” explains Elias Baltassis, BCG Partner and Director, who led a recent conference on the ethical issues of AI and AGI. Joining Baltassis on the podium were three AI-AGI experts: Mike Begembe, author of Cracking the Data Code; Marc-Antoine Dilhac, AI chair at MILA; and Hubert Etienne, philosopher and AI ethics researcher at Meta AI & ENS.

“Predicting technology advancements is not something we humans do very well,” explains Mike Begembe. “Eight years ago, experts were asked to predict when machines would beat a human being at the classic AI game, Go. They all predicted 10 years or more. It took only 2 years.”

Marc-Antoine Dilhac says AGI is more difficult to predict. “Even as more AI systems run multiple algorithms that execute different tasks in different situations, AGI is still science fiction. And I’m not sure we will see AGI in the foreseeable future.”

AI, AGI, and Ethics

A central challenge of AGI is understanding how human intelligence works; today’s theories of intelligence are not unified. And while the ethical debate focuses mainly on automating decision making, the real issue is relying on automation where human judgment and human responsibility are needed.

“If you think AGI will perfectly replicate everything that humans do, including thinking, reflecting, and emotions, that will not happen,” says Hubert Etienne. “The concept is more nefarious. Building a machine that makes better decisions than humans in every single field of decision-making opens up obvious ethical issues.”

Conference participants were asked to identify AI-related ethical issues, which were then classified into three broad categories:

Issues stemming from human mistakes. Human mistakes are often the result of faulty algorithms or biased training datasets, which Baltassis calls “artificial stupidity.” A classic example is facial-recognition systems that fail to recognize non-white faces. “These are the most common sources of AI-related ethical issues and the most frequently mentioned in the press and in books,” according to Baltassis. “Companies and organizations must eliminate these as soon as possible.”
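To make this first category concrete, here is a minimal sketch, in Python, of how a team might surface this kind of “artificial stupidity” by comparing a model’s error rate across demographic groups. The function, group names, and results data are all hypothetical illustrations, not part of the conference material; a real audit would run on a demographically labeled evaluation set.

```python
# Minimal sketch: compare a classifier's error rate per demographic group.
# All data below is hypothetical; a real audit would use a held-out,
# demographically labeled evaluation set.

from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical face-recognition outcomes: (group, predicted, actual)
results = [
    ("group_a", "match", "match"), ("group_a", "match", "match"),
    ("group_a", "no_match", "match"),
    ("group_b", "no_match", "match"), ("group_b", "no_match", "match"),
    ("group_b", "match", "match"),
]

for group, rate in error_rate_by_group(results).items():
    # A large gap between groups is the symptom Baltassis describes:
    # the model, not the task, is at fault.
    print(f"{group}: error rate {rate:.0%}")
```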


Issues corresponding to ethical dilemmas. Ethical dilemmas are closely related to societal choices, explains Baltassis, and often revolve around one question: Do we lessen the well-being of part of the population in order to reduce discrimination across society? A typical example is car insurance premiums for women: as women are, statistically, safer drivers than men, eliminating the “gender” criterion is said to lessen discrimination and increase societal well-being. Yet without the gender criterion, women’s premiums will rise, and thus the well-being of women drivers will fall.
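The trade-off lends itself to back-of-envelope arithmetic. The sketch below uses entirely hypothetical claim costs and group sizes (not actuarial data) to show why pooling the two groups raises premiums for the lower-risk one.

```python
# Back-of-envelope sketch: why dropping a risk-relevant criterion raises
# premiums for the lower-risk group. Figures are illustrative only.

expected_claim_cost = {"women": 400.0, "men": 600.0}  # per driver, per year
drivers = {"women": 1_000, "men": 1_000}

# With the gender criterion, each group pays its own expected cost.
segmented = expected_claim_cost

# Without it, everyone pays the pooled average.
total_cost = sum(expected_claim_cost[g] * drivers[g] for g in drivers)
pooled = total_cost / sum(drivers.values())

print(f"Pooled premium: {pooled:.0f}")                          # 500
print(f"Change for women: {pooled - segmented['women']:+.0f}")  # +100
print(f"Change for men:   {pooled - segmented['men']:+.0f}")    # -100
```

The societal choice is visible in the output: the discrimination-reducing rule transfers exactly the cost difference from the higher-risk group to the lower-risk one.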


Issues often unsolvable, even by humans. The self-driving car is a well-known example of an unsolvable problem. We have all heard the dilemma: a self-driving Tesla is traveling along when 3 people suddenly step into its path. The distance is too short to brake. Should the car turn the wheel, hit a lamp post, and kill its innocent passenger? Or continue on its path and possibly kill the people crossing the street? In a survey, most participants took a utilitarian approach, saying the car should turn the wheel: “better 1 dead than 3.” Yet when the same scenario is tested in driving simulators, most drivers simply try to brake. Clearly, it is impossible for a human to resolve this ethical dilemma in a few milliseconds. So how can we expect an algorithm to do so? And how can we expect it to do so without our instructions, or our ethical, emotional, and philosophical choices?
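That closing question can be made concrete in a few lines of code. The sketch below is deliberately crude and entirely hypothetical: whatever rule the car follows, someone must first hard-code a moral weighting, and choosing those weights is precisely the step the algorithm cannot take for us.

```python
# Deliberately crude sketch: any automated rule for this dilemma forces
# the designer to hard-code a moral choice up front. The weights below
# are arbitrary assumptions -- which is exactly the article's point.

PASSENGER_WEIGHT = 1.0   # moral "value" assigned to the passenger
PEDESTRIAN_WEIGHT = 1.0  # and to each pedestrian. Who decides these?

def swerve(n_pedestrians: int) -> bool:
    """Return True if the utilitarian rule says to hit the lamp post."""
    return n_pedestrians * PEDESTRIAN_WEIGHT > 1 * PASSENGER_WEIGHT

print(swerve(3))  # True: "better 1 dead than 3"
print(swerve(1))  # False: with equal weights, the rule gives no answer
                  # that ethics has not already supplied
```

Note that the simulator finding cuts against learning such weights from human behavior: drivers who simply brake never compute the trade-off at all.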

Participants generally agreed that the elements of human decision making, particularly emotions and ethics, must be part of the AGI equation. The whole psychology and philosophy of being human must be considered.


The Road to Ethical AI

Clearly there are numerous issues to discuss and decisions to make. So far, the world seems to be taking baby steps. At BCG’s last running count, 92 international organizations, including academic institutions, governments, and companies, had issued their own versions of ethical principles for Artificial Intelligence. What was missing were the next steps: the policies and tools necessary to implement those principles. The European Commission issued a first draft of AI regulations last year. And, as with regulations on banking and data privacy, many expect EU regulations on AI to become the de facto standard in the world.


It’s a Human Decision

AGI may or may not materialize. The debate will continue among the AI and AGI communities for many years to come. What must materialize is a move beyond artificial stupidity and the poorly trained models that have become a hallmark of the conversation. According to Marc-Antoine Dilhac, the real problem is relying on technology in sensitive areas where human judgment and human responsibility are needed. “At the end of the day, it’s not a technology decision, it must be a human decision.”


Sidebar

Simple AI vs. Complex AGI

Artificial Intelligence, AI, is often defined as programs and systems that work on relatively simple tasks, such as estimating a person’s ability to perform a certain action. AI may also take on more complex but still well-defined endeavors, such as driving a car or managing a factory. In doing so, AI acquires capabilities typically associated with human activities, such as vision, hearing, and language.

Artificial General Intelligence, AGI, expands on AI to include anthropomorphic characteristics, that is, human traits ascribed to non-human things. AGI would allow machines to acquire very diverse knowledge and to learn to act upon this knowledge independently, the way we humans do. AGI systems would be able to think, comprehend, learn, converse, and problem-solve.


[1] “AGI is nowhere close to being a reality,” 17 Dec 2018

[2] “Killer Robots? Lost Jobs?” Slate, 28 April 2016