Welcome to the 42 new deep divers who joined us since last Wednesday. If you haven't already, subscribe to join our community and receive weekly AI insights, updates, and interviews with industry experts, straight to your feed.
DeepDive
Your weekly immersion in AI.
People are funny. We make jokes, we understand jokes, and we laugh at jokes – most of us rely on humour (at least to some extent) to overcome the seriousness of life, and to connect with others.
But will AI, known for taking things very literally, ever be able to ‘get’ the joke?
Well, yes.
For a few years now, researchers at MIT have been working on AI models that can detect sarcasm and satire in human-written text.
In 2017, it was reported that an MIT-built system could detect sarcasm in tweets more effectively than humans, helping to fine-tune AI's capacity to detect and remove hate speech from social media platforms.
In 2019, MIT Technology Review detailed how computer scientists had crowd-sourced the task of transforming satirical sentences from The Onion into serious ones. The goal was to create a database of sentences categorised as either 'funny' or 'not funny', so that machine learning models could learn from these opposing data sets and identify the characteristics of genuine vs satirical writing.
They did this through an online game called unfun.me, in which players were given a satirical headline and asked to rewrite it, changing as few words as possible, with the aim of tricking other players into believing the headline was genuine. Players were also asked to rate each headline on how funny they thought it was.
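Why do minimal rewrites make such useful training data? Because when a player changes only a word or two, the difference between the satirical headline and its 'serious' rewrite points straight at the words carrying the joke. Here's a minimal sketch of that idea; the headline pairs below are invented for illustration, not real unfun.me data:

```python
# Hypothetical sketch of the unfun.me idea: comparing a satirical headline
# with its minimally-edited "serious" rewrite highlights the humour-bearing
# words. These example pairs are invented, not from the real dataset.

pairs = [
    # (satirical version, "serious" rewrite with as few words changed as possible)
    ("Area man heroically finishes entire sandwich",
     "Area man finishes sandwich"),
    ("Nation's cats demand earlier breakfast",
     "Many cats ask for earlier breakfast"),
]

def changed_words(satirical, serious):
    """Words in the satirical headline that are absent from the rewrite --
    a rough proxy for where the humour lives."""
    return set(satirical.lower().split()) - set(serious.lower().split())

for sat, ser in pairs:
    print(sorted(changed_words(sat, ser)))
```

Collected at scale across thousands of players, these word-level differences become labelled signals a classifier can learn from.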
And AI’s satire-detection capabilities are improving
A 2023 study by Juliann Zhou, a researcher at New York University, used machine learning language models to analyse a collection of written posts from the social media discussion platform Reddit. Zhou found that certain models, including CASCADE and BERT, demonstrated higher precision when interpreting 'contextualised language', detecting an underlying sarcastic tone in written comments.
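Under the hood, studies like this frame sarcasm detection as supervised classification: a model is shown comments labelled sarcastic or sincere and learns which language patterns separate the two. The models in the study are large neural networks (CASCADE, BERT), but the setup can be sketched with a toy word-count scorer; the labelled comments and scoring rule below are invented for illustration:

```python
# Toy illustration of supervised sarcasm classification. The real study used
# large neural models (CASCADE, BERT); this sketch just shows the framing:
# learn from labelled examples, then score unseen text. Training data invented.
from collections import Counter

train = [
    ("oh great, another monday", 1),            # 1 = sarcastic
    ("wow, what a totally useful update", 1),
    ("this update fixed my crash, thanks", 0),  # 0 = sincere
    ("great talk, I learned a lot", 0),
]

# Count how often each word appears under each label.
counts = {0: Counter(), 1: Counter()}
for text, label in train:
    counts[label].update(text.lower().split())

def sarcasm_score(text):
    """Naive score: positive if the text's words lean sarcastic in the
    training counts, negative if they lean sincere."""
    return sum(counts[1][w] - counts[0][w] for w in text.lower().split())
```

The gap between this toy and BERT is exactly the 'contextualised language' point: word counts treat 'great' the same everywhere, while contextual models score it differently depending on the words around it.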
And now, MIT researchers are working on a model that can distinguish between disinformation and social commentary – so it can tell whether an article, for example, is a piece of satirical cultural criticism or a piece of fake news.
Why does it matter if AI gets the joke?
There are lots of reasons why it matters. If AI can tell the difference between satirical humour and seriousness, it could:
- Be used to analyse and label fake news online in the future, so readers can easily see when a piece of information isn’t verified as genuine.
- Help to identify hate speech in the form of irony or satire in social media posts.
- Make AI-generated content more relatable and readable for humans, because AI models will be able to communicate meaning by saying something completely different from what is literally stated, an important skill in authentic human communication.
Nuance is key to the way people talk and write to one another. If AI is going to be a valuable partner for humans in the realm of digital communication, it’s got to have the capacity to identify and categorise nuance – and to not take everyone at their word.
Join the conversation
Why is it important for AI models to recognise the features of humour and satire? Head to the comment section on LinkedIn and tell us what you think.
Did you miss DeepFest 2024? Don't worry: register now to secure your place at the 2025 edition. We can't wait to see you there!