Everyone is jumping on the artificial intelligence (AI) bandwagon. But as AI’s uses grow, the hype around its ethical implications is taking center stage as well.

15 SEPTEMBER 2021
One of AI’s most discussed ethical dilemmas involves self-driving cars and pedestrians, a scenario in which every available choice leads to a bad outcome: keep going and possibly hit the pedestrians, brake too hard and hurt the driver, turn left and hit a child, turn right and hit a senior citizen.
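To see why this is so uncomfortable, it helps to strip the dilemma down to what a machine would actually compute. The sketch below is a deliberately naive illustration, not a real autonomous-driving policy; the maneuver labels and cost weights are entirely hypothetical, and that is precisely the point: someone has to choose those numbers.

```python
# A deliberately naive illustration of the dilemma as cost minimization.
# The maneuvers and weights are hypothetical, not a real driving policy.

OUTCOME_COSTS = {
    "continue": 0.9,    # keep going and possibly hit the pedestrians
    "hard_brake": 0.4,  # brake too hard and hurt the driver
    "turn_left": 1.0,   # turn left and hit the child
    "turn_right": 0.8,  # turn right and hit the senior citizen
}

def choose_maneuver(costs: dict) -> str:
    """Pick the maneuver with the lowest assigned cost."""
    return min(costs, key=costs.get)

print(choose_maneuver(OUTCOME_COSTS))  # -> 'hard_brake'
```

Every “ethical” judgment in this toy model lives in the weights, which is exactly where the human responsibility hides.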
One has to wonder, however, whether we are expecting AI to resolve ethical issues that humans themselves cannot; the self-driving car is a clear example. Some might argue that properly functioning AI, armed with robust definitions of the available choices, would anticipate the situation and avoid getting into it in the first place. But for now, this seems rather unrealistic.
Responsible AI (RAI) and the ethical use of AI are often in the news. To put things into perspective, however, we need to keep in mind that the vast majority of AI’s current use cases lie in industrial processes such as the optimization of supply chains, manufacturing, retail assortments, and pricing, which don’t lend themselves to grandstanding ethical dilemmas.
AI-driven industrial process optimization has also played a vital role in helping us combat the global pandemic and reach the stage of relative immunization we are at today. It has been instrumental in shortening Stage 1 of the vaccine testing process, one of the most time-consuming stages in the arduous journey toward approval.
However, as long as decisions are being made, responsibility must be taken. And, in the case of the current pandemic, at some point, important questions need to be asked (e.g., should we allow AI to conduct the contact tracing necessary to control the spread of the virus?).
Given that our phones ‘know’ all about our activity and who we have been in contact with over time, AI could theoretically identify probable paths of transmission and clusters of the virus faster than healthcare services can react. For now, governments have allowed only a few specific applications to do so. In effect, governments seem to be accepting a higher rate of infection in exchange for preserving the privacy of our data and avoiding the public reaction that a fully automated, AI-driven tracing system might provoke.
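Mechanically, this kind of tracing is not exotic: once proximity logs are turned into a contact graph, finding an exposure cluster is a standard graph traversal. The sketch below, with entirely invented names and data, shows the idea; a real system would layer timing, distance, and infection-risk models on top.

```python
# A minimal sketch of automated contact tracing as graph traversal,
# assuming proximity logs have already been turned into a contact graph.
# All names and contacts below are invented for illustration.
from collections import deque

contacts = {
    "alice": ["bob", "carol"],
    "bob": ["alice", "dan"],
    "carol": ["alice"],
    "dan": ["bob", "erin"],
    "erin": ["dan"],
}

def exposure_cluster(index_case: str, max_hops: int = 2) -> set:
    """Breadth-first search: everyone within max_hops contacts of a case."""
    exposed, frontier = {index_case}, deque([(index_case, 0)])
    while frontier:
        person, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for contact in contacts.get(person, []):
            if contact not in exposed:
                exposed.add(contact)
                frontier.append((contact, hops + 1))
    return exposed - {index_case}

print(exposure_cluster("alice"))  # members: bob, carol, dan
```

The hard questions are not in this code; they are in who holds the contact graph and under what consent.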
More generally, smart systems working at hyper speed, with minimal latency and embedded feedback loops, will only become smarter and more adaptive. And that is where the promise of AI lies: it should allow us to do better what we already do best.
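In the simplest terms, an embedded feedback loop means each decision’s observed outcome nudges the system’s parameters before the next decision. The toy loop below, a single-parameter least-mean-squares update with invented numbers, is one minimal way to picture the mechanism.

```python
# A toy sketch of an embedded feedback loop: each decision's observed
# outcome is fed back to adjust the system. All numbers are illustrative.

weight = 0.5        # a single tunable parameter
learning_rate = 0.1

def decide(signal: float) -> float:
    return weight * signal

for signal, observed in [(1.0, 0.8), (0.9, 0.7), (1.1, 0.9)]:
    prediction = decide(signal)
    error = observed - prediction             # feedback from the field
    weight += learning_rate * error * signal  # adapt for the next decision
    print(f"prediction={prediction:.2f} updated weight={weight:.2f}")
```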
However, this groundbreaking technology does not absolve executives of their responsibility.
Responsible AI is a widely debated topic, but there is consensus that it involves monitoring the technology as it operates in the field. BCG research has shown that most AI players (international organizations, public authorities, companies) have limited themselves to defining high-level principles without developing the processes and tools necessary for operationalizing Responsible AI. The same research has shown that even organizations that have acted on Responsible AI largely overestimate their level of advancement.
RAI involves incorporating widely accepted standards of right and wrong into the models themselves. These standards have concrete consequences: we need to ensure that the AI systems eventually deployed are unbiased, fair, explainable, and safe, and that they have appropriate governance to safeguard privacy, minimize environmental impact, and augment humans rather than replace them. It’s a long list, but in a world where responsibility matters, definitions matter.
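Each item on that list can, at least in part, be turned into a measurable check. As one minimal example, a demographic parity gap compares positive-prediction rates across groups; the sketch below uses invented data, and this metric is only one of several contested definitions of fairness.

```python
# A sketch of one operational check from the list above: demographic parity,
# a common (and contested) group-fairness metric. All data is hypothetical.

def demographic_parity_gap(predictions, groups) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        selected = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical model outputs (1 = approved) for applicants in groups A and B.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # -> 0.5 (0.75 vs 0.25)
```

Operationalizing Responsible AI means wiring dozens of such checks, and the governance around them, into the model lifecycle rather than stopping at principles.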
These were some of the points we discussed at the LEAP webinar ‘Fact or Fiction: The Myth of Responsible AI in the Era of Pandemic Panic’, together with experts from BCG, McAfee, Huawei, and Raiven Capital.