AI governance: What’s coming in 2024?

Welcome to the 27 new deep divers who have joined us since last Wednesday. If you haven’t already, subscribe and join our community in receiving weekly AI insights, updates, and interviews with industry experts straight to your feed.

----------------------

DeepDive

Your weekly immersion in AI. 

If there’s one question that defines 2023, it’s this: 

How will AI be regulated in the future? 

And 2024 will be a big year for the creation of policy around the development, use, and distribution of AI tools. 

AI governance must evolve to address ethical, legal, and societal concerns about AI. But this year has seen international confusion over what is and isn’t appropriate from a regulatory standpoint – much of it stemming from the fact that no one’s quite sure how much AI will change our lives, or how quickly it’ll happen.

In October 2023, for example, UK Prime Minister Rishi Sunak said he was in no rush to regulate AI – while simultaneously announcing the creation of an AI safety body that will evaluate and test emerging technologies.

What regulation currently governs the use of AI? 

While some countries and regions are calling for a global consensus on AI regulation, many have already established initial regulatory frameworks to manage the technology.

They include…

  • Saudi Arabia: A new Intellectual Property Law has been proposed that includes a chapter dedicated to IP associated with AI and emerging technologies. The Saudi Data and Artificial Intelligence Authority (SDAIA) has also published its AI Ethics Principles version 2.0 – detailing seven key principles that govern the use and development of AI in Saudi Arabia; including fairness, privacy and security, humanity, and social and environmental well-being.
  • US Federal AI Governance: The US is developing AI governance policies at the federal level – but the focus so far has been on understanding how the nation’s existing laws apply to AI tech. The Biden Administration and the National Institute of Standards and Technology (NIST) have published guidance for the safe use of AI.
  • The European Union: The EU has launched the Artificial Intelligence Act, among the most extensive regulatory frameworks for AI in the world right now. It classifies AI tools according to risk level – from ‘minimal’ to ‘unacceptable’. Any AI tools classified as high risk have to be approved before going to market – and the approach focuses on regulating specific use cases for AI, rather than regulating AI tech overall.
  • China: Shanghai became the first provincial-level government to pass a law on the development of AI in the private sector, and the country has national regulations that govern AI.
  • Brazil: The nation has a robust National Strategy for Artificial Intelligence, and a draft AI law which outlines user rights regarding their interactions with AI systems, and (like the EU framework) includes guidelines for classifying different types of AI tooling based on their perceived risk to society.
  • The UAE: The government has launched a dedicated AI strategy and appointed a Minister of State for AI. 

AI regulation will continue to develop in 2024

Globally, efforts to understand the potential impacts of AI and to regulate it appropriately are picking up pace. 

According to Stanford University’s 2023 AI Index, 37 AI-related bills were passed into law around the world in 2022 – and we can expect an uptick in the rate of laws passed over the coming years. 

In July 2023, OpenAI, Microsoft, Meta, and Google (along with other major tech companies) signed an agreement with the White House, promising to invest in more responsible AI. A number of the companies involved then formed a new industry coalition, the Frontier Model Forum, which aims to “promote the safe and responsible use of frontier AI systems.”

AI governance is on everyone’s minds. Governments are under increasing pressure to deliver clear guidance for their citizens. 

And as countries and regions around the world work to develop regulatory frameworks to ensure the safe, responsible, and ethical use of AI, a growing number of regulations will affect how AI developers and AI-powered enterprises can work. 

Want to stay ahead of the curve? Join us at DeepFest 2024


If you enjoyed this content and want to learn more about the latest in AI, subscribe to our YouTube channel, where we upload new videos every week featuring leading AI industry experts like Pascal Bornet (Chief Data Officer, Aera Technology), Cassie Kozyrkov (Chief Decision Scientist, Google), Betsy Greytok (Vice President, Ethics & Policy, IBM) and more at #DeepFest23. You can also register for DeepFest 2024.
