Predicting life’s first movements – one cell ahead


If you haven’t already, subscribe to join our community and receive weekly AI insights, updates and interviews with industry experts straight to your inbox.


DeepDive 

Your weekly immersion in AI 

What might be possible if we could watch a living organism form – not just as a blur of biological activity, but as a system whose next moves can be anticipated in advance? 

That’s the challenge a team of researchers from MIT set out to tackle. They wanted to find out if a machine learning model could learn the rules of development well enough to predict how thousands of individual cells will behave as an embryo takes shape. 

And according to the resulting report, recently published in the journal Nature Methods, the answer is yes – ML can do that. 

They turned development into a prediction problem 

The study focuses on early development in the fruit fly (Drosophila melanogaster), which has long been used as a model organism in biology. In the first hour of development, a fruit fly embryo undergoes a rapid transformation as roughly 5,000 cells move, divide, detach from neighbours, and reorganise into distinct layers.

This stage (known as gastrulation) is both visually dramatic and scientifically difficult to model. Cells are densely packed, highly interactive, and constantly changing shape and position.

Traditionally, researchers analyse this process after the fact, observing how cells behaved once development has already played out. But the MIT team asked a different question: can a model be trained to forecast what each cell will do next, and when?

A geometric approach to deep learning 

To do this, the researchers developed MultiCell, a geometric deep learning model designed specifically for multicellular systems.

Rather than treating cells as isolated points, MultiCell represents the embryo as a unified graph structure that captures both:

  • the spatial positions of cells, and
  • the physical contacts and junctions between neighbouring cells.

This approach allows the model to learn how local cell geometry and interactions influence future behaviour. As described in the Nature Methods paper, the model integrates these features across space and time, which enables it to reason about collective movement rather than isolated motion.
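To make the idea concrete, here is a toy sketch of that kind of representation: cells as graph nodes carrying spatial positions, with edges linking cells close enough to be in contact. The `build_cell_graph` function, the contact-radius rule, and the example coordinates are our own illustrative simplifications, not the actual MultiCell construction.

```python
import numpy as np

def build_cell_graph(positions, contact_radius):
    """Toy cell-contact graph: nodes are cell centroids, and an edge
    connects any two cells whose centroids lie within contact_radius.
    (A stand-in for the junction networks described in the paper.)"""
    n = len(positions)
    diffs = positions[:, None, :] - positions[None, :, :]  # pairwise offsets
    dists = np.linalg.norm(diffs, axis=-1)                 # pairwise distances
    adj = (dists < contact_radius) & ~np.eye(n, dtype=bool)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if adj[i, j]]
    return adj, edges

# Four cells at the corners of a unit square: sides touch, diagonals don't.
positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
adj, edges = build_cell_graph(positions, contact_radius=1.1)
print(len(edges))  # 4 side pairs; the two diagonals exceed the radius
```

A real model would then pass messages along these edges so each cell's prediction can depend on its neighbours' geometry, which is the core idea behind reasoning about collective rather than isolated motion.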

The model was trained using high-resolution, time-lapse microscopy videos of developing fruit fly embryos, with each cell tracked individually across the first hour of development.
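Turning tracked time-lapse footage into supervised training data amounts to slicing each cell's trajectory into (recent history → next state) pairs. This is a minimal, hypothetical framing of that step – the window size and the drifting-cell trajectory are invented for illustration:

```python
import numpy as np

def make_training_pairs(track, window=3):
    """Slice one tracked trajectory (T x 2 array of positions over time)
    into supervised pairs: a short history window as input, the next
    position as the prediction target."""
    X, y = [], []
    for t in range(len(track) - window):
        X.append(track[t:t + window].ravel())  # flatten the history window
        y.append(track[t + window])            # the position one step ahead
    return np.array(X), np.array(y)

# Toy trajectory: one cell drifting steadily to the right over 6 frames.
track = np.array([[t * 0.1, 0.0] for t in range(6)])
X, y = make_training_pairs(track, window=3)
print(X.shape, y.shape)  # (3, 6) (3, 2)
```

Repeating this over thousands of tracked cells and many embryos yields the kind of dataset a model can learn next-step dynamics from.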

So what can the model predict?

The researchers found that MultiCell can predict (minute by minute) whether individual cells will: 

  • move or shift position
  • divide
  • detach from neighbouring cells
  • participate in folding events that shape the embryo

And crucially, the model predicts not just what happens, but when it happens. 

When evaluated on embryo data not used during training, the model achieved around 90% accuracy in forecasting these cell-level behaviours. Neural activation analyses and ablation studies reported in the paper show that cell geometry and junction networks are essential features driving this predictive performance.
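In spirit, that headline figure is a per-cell classification accuracy: compare each cell's predicted behaviour against what it actually did. A minimal sketch with made-up labels (the behaviour codes and the arrays below are not data from the study):

```python
import numpy as np

# Hypothetical behaviour labels for ten cells at one time step:
# 0 = move, 1 = divide, 2 = detach, 3 = fold.
true_labels = np.array([0, 0, 1, 2, 0, 3, 0, 1, 0, 2])
predicted   = np.array([0, 0, 1, 2, 0, 3, 1, 1, 0, 2])  # one mistake

accuracy = float(np.mean(true_labels == predicted))
print(f"cell-level accuracy: {accuracy:.0%}")  # 9 of 10 correct -> 90%
```

The ablation studies go a step further than raw accuracy: by removing input features (such as junction networks) and watching performance drop, the researchers can attribute predictive power to specific structural cues.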

So the model isn’t simply memorising trajectories; it’s learning meaningful structural cues embedded in the developing tissue. 

OK…but why is this important? 

Because it demonstrates that deep learning can be used to forecast multicellular development at single-cell resolution, rather than just describing it retrospectively.

The researchers emphasise that MultiCell is a proof of concept. While the current study focuses exclusively on fruit fly embryos, the MIT team notes that the same modelling framework could, in principle, be applied to other organisms – if similarly detailed imaging data are available.

That caveat is critical. As the researchers point out, the main limitation is not the model itself, but the scarcity of high-quality, time-resolved cellular data for many biological systems.

Still, the implication is that AI is beginning to move from observing biological processes to anticipating them. Which is big news for science and health. 

What to watch next…

  • Whether similar geometric models can be applied to more complex organisms
  • How improvements in biological imaging unlock richer training data
  • How predictive models like this reshape the way scientists study development over time

For now, MultiCell offers a promising glimpse into what happens when intelligent tech learns processes, instead of just patterns. 

Stay curious – and we’ll see you back in your inbox next week.
