A landmark release: Global AI security guidelines

Welcome to the 32 new deep divers who have joined us since last Wednesday. If you haven’t already, subscribe and join our community in receiving weekly AI insights, updates, and interviews with industry experts straight to your feed.

----------------------

DeepDive

Your weekly immersion in AI. 

Just a couple of weeks ago, we wrote about what we can expect from AI regulations in 2024. Then, on 27 November 2023, the UK’s National Cyber Security Centre (NCSC) released a landmark set of global guidelines for AI security. 

It’s worth emphasising that these are guidelines, and not legislated or enforceable. But they’re a significant step towards a global consensus on safe and secure practices within AI development. 

How were the guidelines developed? 

The Guidelines for Secure AI System Development were developed in partnership with the US Cybersecurity and Infrastructure Security Agency (CISA), with input from 21 other international agencies.

They mark a pivotal moment in the journey towards establishing global standards and collaboration for AI security – and the response within the AI and cybersecurity industries has been largely positive. 

Which countries have endorsed them? 

So far, government agencies from 18 countries have endorsed the guidelines. 

As listed by the NCSC, they are:

  • Australian Signals Directorate’s Australian Cyber Security Centre (ACSC)
  • Canadian Centre for Cyber Security (CCCS) 
  • Chile’s Government CSIRT
  • Czechia’s National Cyber and Information Security Agency (NUKIB)
  • Information System Authority of Estonia (RIA) and National Cyber Security Centre of Estonia (NCSC-EE)
  • French Cybersecurity Agency (ANSSI)
  • Germany’s Federal Office for Information Security (BSI)
  • Israeli National Cyber Directorate (INCD)
  • Italian National Cybersecurity Agency (ACN)
  • Japan’s National Center of Incident Readiness and Strategy for Cybersecurity (NISC) and Japan’s Secretariat of Science, Technology and Innovation Policy, Cabinet Office
  • New Zealand National Cyber Security Centre
  • Nigeria’s National Information Technology Development Agency (NITDA)
  • Norwegian National Cyber Security Centre (NCSC-NO)
  • Poland’s NASK National Research Institute (NASK)
  • Republic of Korea National Intelligence Service (NIS)
  • Cyber Security Agency of Singapore (CSA)
  • UK National Cyber Security Centre (NCSC)
  • United States of America’s Cybersecurity and Infrastructure Security Agency (CISA), National Security Agency (NSA), and Federal Bureau of Investigation (FBI)

The guidelines are organised into four key categories

The categories are secure design, secure development, secure deployment, and secure operation and maintenance. 

They’re aimed both at AI developers that are creating new systems and tools, and at companies that are building services and AI systems on top of tools provided by third parties. 

Lindy Cameron (CEO at NCSC) said in a statement: 

“These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”

And major tech companies have signed their approval of the guidelines, too

In the tech industry, signatories include Google, OpenAI, Microsoft, and Amazon. 

It’s a positive step towards AI systems that are secure by design. And that’s important – because it’s much harder to secure a system after it’s been built than to build security into its foundations from the start. 

What’s your perspective? 

Open up this newsletter on LinkedIn and share your thoughts on the new guidelines. Could they become a globally accepted standard – and what do they tell us about the potential for global collaboration on AI development in the future?


If you enjoyed this content and want to learn more about the latest in AI, subscribe to our YouTube channel, where we upload new videos every week featuring leading AI industry experts like Pascal Bornet (Chief Data Officer, Aera Technology), Cassie Kozyrkov (Chief Decision Scientist, Google), Betsy Greytok (Vice President, Ethics & Policy, IBM) and more at #DeepFest23.

We're counting down to DeepFest 2024. Dive into the world of AI and cutting-edge innovations in Riyadh, Saudi Arabia. Register for DeepFest 2024.
