Making AI as irrational as humans

Welcome to the 15 new deep divers who joined us since last Wednesday. If you haven’t already, subscribe and join our community to receive weekly AI insights, updates, and interviews with industry experts straight to your feed.


DeepDive

Your weekly immersion in AI. 

Humans are irrational. 

AI (most of the time) is not. 

And this can make it difficult for AI and humans to work together, because AI models struggle to predict the irrational decisions their human colleagues will make.

Researchers at MIT and the University of Washington are working on a solution to this – creating a model that can take potentially irrational choices into account.

How can an AI model understand irrationality? 

If you think about human decision-making from a computational perspective, the researchers suggest, then any irrational decisions can be boiled down to limited computing power. We’d need to spend decades thinking through every issue in order to come up with the most rational decision, and we can’t do that. So we have to wing it and do the best we can with the information we have available to us at any given moment. 

In contrast, AI can sort through high volumes of data very quickly in order to land on a rational response. 

So the model in development is designed to account for unknown constraints in computational power that might change the outcome of a decision-making process.
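To make that concrete, here’s a minimal sketch of the idea in Python – not the researchers’ actual model, and every function, name, and number in it is invented for illustration. It treats a bounded agent as a depth-limited search over a toy game, and infers the search depth (the “inference budget”) that best explains the choices we observe:

```python
def best_action(state, depth, actions, step, value):
    """Return the move a depth-limited planner would pick from `state`."""
    def search(s, d):
        if d == 0 or not actions(s):
            return value(s)
        return max(search(step(s, a), d - 1) for a in actions(s))
    return max(actions(state), key=lambda a: search(step(state, a), depth - 1))


def infer_budget(observed, budgets, actions, step, value):
    """Return the search depth that reproduces the most observed choices."""
    def agreement(depth):
        return sum(best_action(s, depth, actions, step, value) == move
                   for s, move in observed)
    return max(budgets, key=agreement)


# Toy domain (made up for this sketch): a state is a number, the agent
# can add 1 or double it, and positions are scored by closeness to 10.
def actions(s):
    return ["add", "double"]

def step(s, a):
    return s + 1 if a == "add" else s * 2

def value(s):
    return -abs(10 - s)

# Choices recorded from a hypothetical player who always grabs the
# immediately best-looking option (i.e. a greedy, depth-1 planner).
observed = [(3, "double"), (4, "double")]

budget = infer_budget(observed, [1, 2, 3], actions, step, value)
print(budget)                                        # -> 1
print(best_action(5, budget, actions, step, value))  # predicted next move: 'double'
```

Once a plausible budget has been inferred, the same limited-depth search can be reused to predict that agent’s next choice – which is essentially what the researchers demonstrate with chess below.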

Inferring a human’s next moves from decisions they’ve made before

The research paper demonstrates how the approach could produce a model that facilitates multi-agent decision-making between humans and AI – by predicting the moves a player will make in a chess match, based on the decisions they’ve made in previous games.

In particular, they observed that players spent less time thinking about simple moves than about complex ones, and that stronger players tended to spend more time planning their moves than weaker players did. So the depth of planning is, as lead author Athul Paul Jacob put it, “a really good proxy of how humans behave.”
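As a back-of-the-envelope illustration of time as a proxy for depth (the helper and the numbers below are invented, not taken from the paper): if searching d moves ahead over b candidate moves visits roughly b^d positions, then the time a player spends thinking hints at how deep their search went:

```python
import math

def depth_from_time(seconds, branching=2.0, seconds_per_node=0.5):
    """Crudely back out a search depth from thinking time, assuming a
    depth-d search visits about branching**d positions."""
    nodes = max(seconds / seconds_per_node, 1.0)
    return round(math.log(nodes, branching))

print(depth_from_time(0.5))  # a snap move  -> depth 0
print(depth_from_time(8.0))  # a long think -> depth 4
```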

But this isn’t about chess. The implications are much bigger: a model like this could enable scientists to teach AI systems about the complexities of human behaviour, and that could help AI respond more effectively to human collaborators. 

Why is this important?

The ability to infer a human’s goals from an understanding of their behaviour holds the potential to make AI assistance much more useful. 

Jacob, a graduate student in Electrical Engineering and Computer Science at MIT, said in a statement:

“If we know that a human is about to make a mistake, having seen how they have behaved before, the AI agent could step in and offer a better way to do it. Or the agent could adapt to the weaknesses that its human collaborators have. Being able to model human behaviour is an important step toward building an AI agent that can actually help that human.”

But doesn’t working towards suboptimal decision-making seem…irrational?

No. We’re not asking AI to make poor decisions – instead, it’s about enabling AI to understand and predict human behaviour, so that it can support the way its human collaborators work. 

It’s not just about creating more intelligent AI – it’s about developing AI that can work more seamlessly with people.


Did you miss DeepFest 2024? Don’t worry – register now to secure your place at the 2025 edition. We can’t wait to see you there.
