5 Experts on the real value of AI safety commitments


Welcome to the 10 new deep divers who joined us since last Wednesday. If you haven’t already, subscribe and join our community in receiving weekly AI insights, updates, and interviews with industry experts straight to your feed.


DeepDive

Your weekly immersion in AI. 

We recently wrote about the new Frontier AI Safety Commitments signed by 16 major AI companies on the first day of the AI Seoul Summit. 

Since then, we’ve been asking experts from the global DeepFest community for their perspective on the real value of this particular agreement – and more importantly, of AI safety agreements more broadly. 

Here’s what they told us. 

Safety commitments signify movement in the right direction

Angela Kane (Member of Board of Directors, Partnership on AI) said that amidst intense discussion about regulating AI, views are divided on whether regulation should be voluntary or imposed by governments.

“The Frontier AI Safety Commitment is voluntary,” Kane said, “and while it is a commendable contribution to setting standards for avoiding risk of tech misuse, it might also be an initiative anticipating more government control and trying to preempt it.” 

“What is lacking is monitoring and oversight; self-regulation means we all have to trust that each company complies with the standards it agreed upon. What is also lacking is more widespread adherence to the Commitment: how many more companies will join?”

That being said, Kane welcomes the Frontier AI Safety Commitments, “for guiding AI in a positive and beneficial direction, as policies can clearly foster responsible AI practices.” 

According to Dr. Roman Yampolskiy (AI Safety Researcher), “AI safety agreements signed by attendees at the AI Seoul Summit indicate a significant stride towards establishing global collaboration and governance standards in AI safety. The agreements forged by leading AI companies worldwide to adopt a uniform approach to AI safety can profoundly impact the field.”

“Primarily, such agreements are poised to enhance overall safety standards by ensuring that AI technologies are developed with robust risk assessment and management strategies, thereby mitigating potential harms. Moreover, by committing to transparent safety frameworks and governance structures, these companies are likely to boost public trust in AI technologies, which is crucial for their broader acceptance and integration into society.” 

“However, the initiative is not without its challenges and potential pitfalls,” Yampolskiy cautioned. “The effectiveness of these commitments heavily depends on the consistency and rigour with which they are implemented across different jurisdictions and companies.” 

“There exists a risk that the rapid pace of AI technological advancement might outstrip the guidelines laid out in these agreements, rendering them obsolete if they are not regularly updated. Additionally, there is the concern of regulatory capture, where AI companies might influence the setting of safety norms to favour their operational freedoms, potentially at the cost of stringent safety measures.”

Rana Gujral (CEO at Behavioral Signals) said:

“The new safety commitments signed at the summit are a significant step forward in our collective effort to navigate the complexities of AI development responsibly. It's encouraging to see such a diverse group of leaders and companies come together to prioritise transparency, accountability, and collaboration. The introduction of the Frontier AI Safety Commitments, which include measures like publishing safety frameworks and pausing systems if risks can't be mitigated, shows a proactive stance that we desperately need.”

But safety commitments are just one of many steps needed to ensure AI’s impact is safe

When we spoke to Yonah Welker (Explorer, Public Evaluator, Board Member - European Commission Projects), he highlighted the wave of safety agreements that has swept across the world in the last 12 months – from the Bletchley Declaration in November 2023, to the US executive order on AI, the UK’s AI Safety Institute, and (in May 2024) the Council of Europe’s adoption of the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. Saudi Arabia has also established an international centre for AI research and ethics.

“National approaches to safety come in parallel,” Welker said, “including a focus on high-risk and unacceptable-risk models, their oversight, regulatory sandboxes, effects on the environment and sustainability, and the talent, capacity, research and infrastructure behind them. However, different governments are still at different stages of deploying this vision in reality: some have already introduced AI Safety Institutes, while others are still aiming to do so. There is also a difference in how nations balance innovation and safety.”

According to Gujral, “The real test will be in the implementation and continuous enforcement of these agreements. It's crucial that these aren't just well-meaning guidelines but are backed by strong regulatory frameworks. Without concrete enforcement, there's a risk that these commitments might not fully address the more severe risks.”

“What stands out to me is the spirit of international collaboration and the push towards global governance for AI,” he added. “This is a promising sign that we're moving towards a more united approach, which is essential given the global nature of AI's impact. The focus on creating AI that is human-centric and trustworthy aligns perfectly with our goal of harnessing AI to tackle some of the world's biggest challenges, from climate change to healthcare.”

Yampolskiy agreed that in terms of international collaboration and global governance, “the commitments from the AI Seoul Summit mark a critical advancement. They serve not only as a platform for setting global standards but also as a conduit for sharing best practices and safety innovations across national and corporate borders. This kind of international cooperation is vital for crafting a cohesive global strategy to manage AI risks, especially given the technology's far-reaching implications.”

Not everyone is confident in the value of AI safety agreements so far 

Like Welker, Alvin W. Graylin (Author of Our Next Reality and Global VP, HTC) pointed out that there have been a number of AI safety-related agreements and guidelines published over the last year, “which is a clear sign there’s elevated awareness of the need for increased efforts in this area.” 

“But none have clear enforcement clauses stipulated,” he said. “So signing such agreements is a nice PR ploy for these companies that may make some people and governments feel better, but they have little actual value in ensuring the safe development and deployment of this transformative technology.”

“There have been very limited true cross-border efforts to actually pool resources to create advanced AI systems,” Graylin went on. “Rather, each company or country is competing to be the first to get to AGI. In such a race condition, the first victims to fall are caution and safety.”

Gujral agreed that the adoption of such agreements isn’t enough on its own: “We must remain vigilant and ensure that these principles are not only adopted but also rigorously applied.” 

How could we do better? 

For Yampolskiy, the approach outlined in the new commitments does align with key principles that are essential for the responsible management of AI’s impacts – including transparency, accountability, and multinational co-operation.

But he detailed ways in which this approach could be improved. “It would be beneficial to establish mechanisms for dynamic updates to these commitments, ensuring they evolve in step with technological advancements,” he said, noting that “the introduction of independent oversight could further bolster the framework’s credibility, ensuring strict adherence to these commitments.”

“Alternatives to further strengthen the approach could include the development of a global, legally binding AI safety framework under international bodies like the United Nations. Additionally, regular international summits could be utilised not just for updates and progress sharing but also for revising and refining AI safety commitments in light of new developments and challenges.

“Offering incentives for exceeding safety standards could also motivate companies to prioritise safety beyond the minimum requirements, fostering an environment of continuous improvement in AI safety practices.” 

Share your perspective 

AI safety affects everyone – so we want more people to join the conversation. Head to the comment section on LinkedIn and share your perspective.


Did you miss DeepFest 2024? Don’t worry – register now to secure your place at the 2025 edition. We can’t wait to see you there.
