Is data poisoning a major threat to AI?

Welcome to the 25 new deep divers who have joined us since last Wednesday. If you haven’t already, subscribe and join our community in receiving weekly AI insights, updates, and interviews with industry experts straight to your feed.

----------------------

DeepDive

Your weekly immersion in AI. 

In October 2023, MIT Technology Review published an article about a new data poisoning tool called Nightshade. 

The tool disrupts AI training data, and it’s designed specifically for artists – enabling them to add invisible changes to the pixels in their artwork before they put it online. If that poisoned art is then scraped into a training set for an AI model, it can cause chaos: the model learns from the invisible changes as well as the visible image, and starts to behave unpredictably. 
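For intuition about what “invisible changes to the pixel data” can mean, here’s a minimal, hypothetical sketch in Python. To be clear, this is not Nightshade’s algorithm – the real tool computes carefully optimised perturbations – it only demonstrates that pixel-level edits can be imperceptible to a human while still altering the data a model trains on:

```python
# A toy illustration of the core idea only - embedding pixel changes a
# human viewer won't notice. This is NOT Nightshade's actual algorithm
# (which optimises perturbations against a model's feature space); it
# simply shows that tiny pixel edits are imperceptible to the eye while
# still changing the numbers a scraper ingests.
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a piece of artwork: a 256x256 RGB image with 0-255 values.
artwork = rng.integers(0, 256, size=(256, 256, 3), dtype=np.int16)

# Low-amplitude perturbation: +/- 2 on a 0-255 scale is invisible to the
# eye, but every perturbed pixel now carries different data.
perturbation = rng.integers(-2, 3, size=artwork.shape, dtype=np.int16)
poisoned = np.clip(artwork + perturbation, 0, 255)

print("max per-pixel change:", np.abs(poisoned - artwork).max())  # <= 2
```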

The emergence of Nightshade comes in response to growing unease within creative professions about intellectual property and copyright – and a wave of lawsuits against AI developers from artists who argue that their work has been appropriated without permission or compensation. 

Ben Zhao (Professor of Computer Science at the University of Chicago) headed the team behind Nightshade, and he told Technology Review that the goal is to put the power back in artists’ hands – “by creating a powerful deterrent” against the non-consensual use of artwork. 

Data poisoning is more often associated with threat actors than artists

Nightshade aside, data poisoning is more commonly perpetrated by cyber threat actors than by artists trying to protect their work. 

The first data poisoning attacks date back more than 15 years. But as more and more organisations rely on generative AI tools to produce information and materials, those tools are becoming part of the attack surface that cyber criminals can exploit – and data poisoning attacks are expected to become more prevalent. 

According to a paper published on IEEE Xplore, data poisoning is the most critical vulnerability in AI and machine learning – and it’s particularly worrying because most organisations are “not equipped with tactical and strategic tools to protect, detect and respond to attacks on their Machine Learning (ML) systems.” 

There are four broad types of data poisoning attack

A 2022 paper published by the IEEE Computer Society outlined four types of data poisoning attack: 

  1. Availability attacks: The attack corrupts the entire model, producing misclassified test samples and false positives and negatives – often through label flipping, which renders the model inaccurate and ineffective (see the sketch after this list).
  2. Backdoor attacks: A threat actor inserts a hidden trigger into the training set, so the model misclassifies any input containing that trigger while behaving normally on clean data – resulting in poor quality output that’s hard to trace.
  3. Targeted attacks: The attacker corrupts just a small number of data samples, and the AI continues to work normally on the others. This makes the attack hard to detect – because the impact is less visible and obvious.
  4. Subpopulation attacks: Like targeted attacks, these only affect specific data samples – but they also corrupt other subsets with similar features to the targeted samples, while the rest of the model continues to function accurately. 
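To make the first category concrete, here’s a minimal, hypothetical sketch of a label-flipping availability attack in Python. The dataset, model, and flip_labels helper are illustrative assumptions – not taken from the IEEE paper – but they show how a rising fraction of flipped labels drags down accuracy:

```python
# A minimal sketch of a label-flipping "availability" attack, using
# scikit-learn's toy digits dataset. All names here are illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def flip_labels(labels, fraction, num_classes=10):
    """Randomly reassign a fraction of training labels (the poisoning step)."""
    poisoned = labels.copy()
    n_poison = int(fraction * len(labels))
    idx = rng.choice(len(labels), size=n_poison, replace=False)
    poisoned[idx] = rng.integers(0, num_classes, size=n_poison)
    return poisoned

for fraction in (0.0, 0.2, 0.4):
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train, flip_labels(y_train, fraction))
    print(f"poisoned fraction {fraction:.0%}: "
          f"test accuracy {model.score(X_test, y_test):.2f}")
```

Even this crude attack typically costs the model noticeable test accuracy once a meaningful fraction of labels has been flipped.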

It poses a real threat to the stability of AI models

We know that a major drawback of AI is that its efficacy and accuracy are directly linked to the quality of the data it’s trained on. High quality data leads to good results; poor quality data leads to poor quality output. 

And data poisoning allows adversaries to deliberately exploit this weakness – because even the most advanced machine learning models can be rendered useless (or dangerous) by bad data.

For example, ImageNet Roulette – an experiment that classified user-submitted photos using a model trained on crowdsourced labels – quickly began attaching racist language and gender slurs to images, because that language was present in the underlying training labels. If biased labelling by ordinary internet users can skew AI output like this, threat actors clearly have plenty of scope to deliberately manipulate – and corrupt – AI. 

Data poisoning can be used to create deepfakes, build malicious chatbots, damage an organisation’s finances and reputation, and much more. 

And Nightshade shows that as more people understand the disruptive potential of data poisoning, groups who are unhappy with the impact AI is having on their lives and work could use it as a strategy to push back against AI developers. 

Developers will need to keep working on robust strategies for safeguarding their AI systems against data manipulation, and deploy countermeasures that can detect and remove poisoned data. 
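What might detection look like in practice? Here’s a minimal, hypothetical sketch of one common data-sanitisation heuristic – flagging training samples whose labels disagree with their nearest neighbours. It’s a simple illustration under assumed tooling (scikit-learn), not a production defence, and it won’t catch subtler attacks such as clean-label backdoors:

```python
# A minimal data-sanitisation sketch: flag training samples whose labels
# disagree with a neighbour-based prediction. Simple heuristic only.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)

# Predict each sample's label from its neighbours (out-of-fold, so a
# sample never votes for itself) and flag disagreements as suspicious.
knn = KNeighborsClassifier(n_neighbors=10)
neighbour_labels = cross_val_predict(knn, X, y, cv=5)
suspicious = np.flatnonzero(neighbour_labels != y)

print(f"flagged {len(suspicious)} of {len(y)} samples for manual review")
```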

And Nightshade is a reminder to keep listening to digital users, too: AI acceptance is key to ensuring that users equip AI with high quality data, and enable the best possible output in the future. 


If you enjoyed this content and want to learn more about the latest in AI, subscribe to our YouTube channel, where we upload new videos every week featuring leading AI industry experts like Pascal Bornet (Chief Data Officer, Aera Technology), Cassie Kozyrkov (Chief Decision Scientist, Google), Betsy Greytok (Vice President, Ethics & Policy, IBM) and more at #DeepFest23. You can also register for DeepFest 2024.
