Welcome to the 36 new deep divers who have joined us since last Wednesday. If you haven’t already, subscribe and join our community to receive weekly AI insights, updates, and interviews with industry experts, straight to your feed.
Your weekly immersion in AI.
We’re very excited to be on your feed this week, delivering an interview with Elizabeth Adams (Affiliate Fellow at the Stanford Institute for Human-Centered AI, and Former Chief AI Ethics Advisor).
A scholar-practitioner and advisor to organisational leaders, Elizabeth is deeply involved in creating a more equitable and inclusive future in the AI space – and, by extension, the wider world.
Dive into this exclusive glimpse of Elizabeth’s perspective on AI, ethics, and the urgent need for inclusive global conversations.
Could you share your career journey so far?
“My professional journey spans over two decades, predominantly in technology leadership, where I've consistently found fulfilment in guiding large teams and driving initiatives. My leadership philosophy is grounded in principles of equity – advocating for fairness in roles, titles, pay, and positions – coupled with a strong belief in the enriching power of collaboration and diverse perspectives.
“A significant turning point occurred when I transitioned to AI Ethics, prompted by the realisation that, despite the widespread acclaim of AI as the next tech revolution, its benefits weren't universally accessible. This realisation fuelled my curiosity, leading me to delve into the profound consequences of AI – particularly distinguishing between AI bias and AI harm.
“In differentiating between AI bias and AI harm, I recognised that AI bias involves the presence of systematic and unfair favouritism within an AI system, often stemming from biased training data. On the other hand, AI harm encompasses the tangible and intangible consequences that result from biased AI outcomes, extending to real-world impacts on individuals and communities.
“Active involvement as a community leader in Minneapolis marked another crucial chapter. Here, I initiated a civic tech project aimed at fostering shared decision-making regarding AI-enabled technologies. This hands-on experience inspired the creation of my advisory firm. In this capacity, I guide leaders in adopting and operationalising Responsible AI, aligning technological advancements with ethical considerations.
“Simultaneously, I'm pursuing a doctorate with a specialised focus on the Leadership of Responsible AI. This academic pursuit allows me to derive joy from extensive research, listening to the narratives of leaders and employees, and championing those who are often overlooked in the lifecycle of AI design and development.”
What does Responsible AI mean to you?
“Great question. Responsible AI, for me, involves the ethical and accountable development, deployment, and use of AI systems. My exploration, framed through my Leadership of Responsible AI conceptual model, emphasises broad employee stakeholder engagement.
“This involves creating artefacts like policies, procedures, guidelines, and frameworks that shape AI design with Responsible AI tenets such as…
…through a collaborative process. The essence lies in incorporating these considerations throughout the AI lifecycle. My research aims to understand how employee engagement and broader representation can enhance Responsible AI, with a focus on human values, fundamental rights, and minimising biases and unintended consequences.”
In terms of ethical considerations and practices, can you identify anything that's missing in the way AI is currently being developed and adopted? Or to put it a different way, what could developers and business leaders do better?
“I could dedicate an entire day to this topic as it forms the core of my research. Beyond the principles I've mentioned earlier, my research has brought forth three additional considerations.
“Firstly, there's the matter of consent; participants suggest that using data without explicit consent is unethical, leading to the contemplation of regulations to address this concern.
“Second, there's a growing emphasis on attribution; participants express a desire to credit the source, especially when using LLMs to modify policies that become official documents.
“The third consideration revolves around protecting intellectual property. Employees, unfamiliar with Responsible AI in their organisational culture, draw on their personal experience with AI outside of work to establish ethical practices. They seek safeguards for their content used in AI models, applying personal lessons to guide ethical leadership in the absence of a formal Responsible AI policy.”
Do you think a future where AI has a positive impact on most people's lives is possible?
“Achieving a future where AI positively impacts most people's lives hinges on three critical factors.
“Firstly, the design of AI should engage large groups that represent diverse cultures, communities, expertise, and lived experiences.
“Secondly, organisations need to instil responsiveness into their culture, enabling swift modification and modernisation of systems in the face of identified errors or harm.
“Thirdly, democratising AI use is paramount, ensuring users around the world have a safe space to share findings that may be biased, cause harm, or lead to exclusion or discrimination.”
Finally, why are events like DeepFest valuable to you?
“Engaging with influential world leaders who shape perspectives on Responsible AI within their respective domains is of great value to me. Events like DeepFest play a crucial role in uniting these influential figures.
“The development of AI cannot happen in isolation, and discussions about its challenges must also be inclusive. DeepFest provides a platform for meaningful interactions with leaders, encouraging a collective effort to make Responsible AI the standard rather than a peripheral concern.
“In today's landscape, with the urgent need for ethical AI practices, these interactions contribute to addressing what can be considered an emergency, especially for those who experience AI-related harm. It goes beyond mere discussion; it's a call for actionable solutions. DeepFest serves as a catalyst for a global conversation that transitions into intentional action.”
Want to learn more from Elizabeth?
Register now to attend DeepFest 2024.
If you enjoyed this content and want to learn more about the latest in AI, subscribe to our YouTube channel, where we upload new videos every week featuring leading AI industry experts like Pascal Bornet (Chief Data Officer, Aera Technology), Cassie Kozyrkov (Chief Decision Scientist, Google), Betsy Greytok (Vice President, Ethics & Policy, IBM) and more at #DeepFest23. You can also register for DeepFest 2024.