Welcome to the 43 new deep divers who joined us since last Wednesday. If you haven’t already, subscribe and join our community to receive weekly AI insights, updates, and interviews with industry experts straight to your feed.
DeepDive
Your weekly immersion in AI.
On May 21st, it was announced that Google, Microsoft, Meta, OpenAI, and Amazon are among a total of 16 AI companies that have committed to a set of safety outcomes for their AI systems.
As reported by The Independent, the agreement was revealed on the opening day of the AI Seoul Summit – where major players in the AI space confirmed that they will each publish safety frameworks to explain how they’ll measure the risks of their AI models, under the new Frontier AI Safety Commitments.
It’s being hailed as a historic moment in the journey towards international standards for AI governance. And we want to know what you think about it.
What risks?
The full extent of the risks to be measured under the commitments isn’t yet clear, but the frameworks will include examining the risk of misuse of AI tech by malicious actors, detailing when severe risks would be ‘deemed intolerable’, and setting out what the companies will do to make sure they stay within a reasonable level of risk.
In cases of extreme risk, the companies have committed to ‘not develop or deploy a model or system at all’ if they’re unable to keep risks below the agreed thresholds. And those thresholds will be defined over the coming months, with the companies seeking input from trusted organisations and experts, including their home governments.
A global movement
The 16 companies involved are among the most pioneering AI organisations around the globe, with signatories based in the US, China, and the Middle East.
Ben Garfinkel (Director at the Centre for the Governance of AI) said in a statement:
“These commitments represent a crucial and historic step forward for international AI governance. My expectation is that they will speed up the creation of shared standards for responsible AI development, help the public to judge whether individual companies are doing enough for safety, and support informed policy making around the world.”
And Anna Makanju (VP of Global Affairs at OpenAI) said:
“The Frontier AI Safety Commitments represent an important step toward promoting broader implementation of safety practices for advanced AI systems, like the Preparedness Framework OpenAI adopted last year.
“The field of AI safety is quickly evolving and we are particularly glad to endorse the commitments’ emphasis on refining approaches alongside the science. We remain committed to collaborating with other research labs, companies, and governments to ensure AI is safe and benefits all of humanity.”
Join the conversation
We want to know what you think. What do these safety commitments mean for the development of global governance for AI, and how much of an impact will they have? Head to this newsletter’s comment section on LinkedIn and share your opinion.
Did you miss DeepFest 2024? Don’t worry – register now to secure your place at the 2025 edition. We can’t wait to see you there.