If you haven’t already, subscribe and join our community in receiving weekly AI insights, updates and interviews with industry experts straight to your feed.
If you were trying to cross a busy foreign city using only a phrasebook, you’d be able to ask for directions and recognise people’s hand gestures. But you wouldn’t understand the streets – how they connect, which way the traffic flows, or what would happen if you took a wrong turn.
That’s how much of today’s AI works. It’s impressive and fast, but it’s operating without a grounded understanding of the world it describes.
Now, a decades-old idea is making a comeback to change that: world models.
At their core, world models are exactly what they sound like – internal representations of how the world works. Instead of simply predicting the next word or image, these systems aim to simulate reality itself: how objects move, how environments change, and what happens when actions are taken.
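To make that concrete, here is a minimal sketch of the idea – not any real system’s implementation. It assumes a toy one-dimensional world, and every name (`transition`, `rollout`, `best_plan`) is hypothetical. The point is the shape of the loop: the agent simulates candidate actions inside its model of the world and only then acts.

```python
# Toy sketch (all names hypothetical): a "world model" here is just a
# transition function state x action -> next state, which an agent can
# use to simulate outcomes before acting in the real world.

def transition(state, action):
    """Toy 1-D world: state is a position on a line with walls at 0 and 10."""
    x = state + {"left": -1, "right": +1, "stay": 0}[action]
    return min(max(x, 0), 10)  # walls stop movement

def rollout(model, state, plan):
    """Simulate a sequence of actions inside the model, not the real world."""
    for action in plan:
        state = model(state, action)
    return state

def best_plan(model, state, goal, candidate_plans):
    """Pick the plan whose simulated end-state lands closest to the goal."""
    return min(candidate_plans,
               key=lambda plan: abs(rollout(model, state, plan) - goal))

plans = [["right"] * 3, ["left"] * 3, ["stay"] * 3]
print(best_plan(transition, 5, 8, plans))  # → ['right', 'right', 'right']
```

In the “computational snow globe” framing, `rollout` is the snow globe: the agent shakes it as many times as it likes, at no real-world cost, before committing to a plan.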
As this article in Quanta Magazine puts it, a world model acts like a kind of “computational snow globe” – a contained simulation where an AI system can test ideas and refine decisions before acting in the real world.
Compare that with today’s LLMs – they excel at recognising patterns in vast datasets, generating text or images that look right. But they often struggle with basic physical reasoning or causal logic.
World models aim to cover that missing ground by giving AI an internal simulation of reality – a context much closer to what humans rely on to navigate life.
There’s a bit of buzz around world models right now, but they’re not new. The concept has roots in early artificial intelligence research and became more formalised in reinforcement learning, where agents learn to navigate environments by building internal representations and predicting outcomes.
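In the reinforcement-learning setting described above, the simplest version of “building an internal representation” is a model learned from experience. The sketch below is a classic tabular approach, not any specific system; the class name `TabularWorldModel` and its methods are hypothetical. The agent records observed transitions and predicts the outcome it has seen most often.

```python
from collections import Counter, defaultdict

# Hypothetical sketch of classic model-based RL: the agent logs observed
# (state, action) -> next_state transitions, then predicts the most
# frequently seen outcome for a given state-action pair.

class TabularWorldModel:
    def __init__(self):
        # (state, action) -> Counter of observed next states
        self.counts = defaultdict(Counter)

    def observe(self, state, action, next_state):
        """Record one real-world transition."""
        self.counts[(state, action)][next_state] += 1

    def predict(self, state, action):
        """Return the most commonly observed outcome, or None if unseen."""
        outcomes = self.counts[(state, action)]
        return outcomes.most_common(1)[0][0] if outcomes else None

model = TabularWorldModel()
for s, a, s2 in [(0, "go", 1), (0, "go", 1), (1, "go", 2)]:
    model.observe(s, a, s2)
print(model.predict(0, "go"))  # → 1
```

This works in small, discrete settings like games – and its limitation is exactly the one the article notes next: a lookup table of seen transitions cannot cover the messy, continuous, high-dimensional real world.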
In simpler settings (like games or controlled simulations) this approach has been used for years. But there was always a limitation: the real world is far messier than a game environment.
Capturing the complexity of physical reality (with its uncertainty and scale and nuance) proved difficult. As a result, world models remained largely confined to narrower domains like robotics or simulated environments.
The idea itself isn’t suddenly changing now – but the context around it is.
Several converging trends are bringing world models back into focus.
First, AI systems are no longer limited to text. Modern models can process images, video, and other sensory data, making it possible to build richer representations of the world – a shift highlighted in recent coverage by Scientific American.
Second, as that Quanta article pointed out, the limitations of current approaches are becoming clearer. While LLMs have driven incredible progress, they still struggle with causality, long-term planning, and consistent reasoning about the physical world.
This has prompted leading researchers to explore alternative paths. Yann LeCun, former VP and Chief AI Scientist at Meta, has been particularly vocal about the need for AI systems that can model the world in order to plan and act effectively. In December 2025 he confirmed he had launched a new world model startup.
And at the same time, researchers like Fei-Fei Li are emphasising the importance of spatial intelligence (the ability for AI to understand three-dimensional environments, objects, and relationships) as a key step towards more capable systems.
All of this suggests a broader recognition that predicting language is not the same as understanding the world.
If successful, world models could change how AI interacts with reality – making that interaction physical as well as digital.
There’s a reasonable question to ask here: if this idea has been around for decades, why hasn’t it already transformed AI?
Because modelling the real world is extremely difficult.
Reality is unpredictable and filled with edge cases. Building systems that can capture not just correlations, but true causal relationships is a huge challenge.
Even defining what it means for an AI system to ‘understand’ something is still an open question.
So we might be building systems that act as if they understand the world long before we can prove that they do.
Improvements in world models aren’t necessarily intended to replace LLMs. Rather, they’re a potential evolution – a step that extends the capabilities of LLMs with AI models that can learn from the structure and dynamics of the world itself.
AI is moving on from just learning to speak, and beginning to learn how the world works.
What impact do you think world models could have over the next five years? Open this newsletter on LinkedIn and tell us in the comments.
We’ll see you back here next week.