The 12 styles problem: why AI struggles with true creativity

Over the last couple of years, we’ve been dancing around the idea that AI trained on billions of images could unlock almost infinite machine creativity. Ask for a cyberpunk skyline, a surreal dreamscape or a medieval castle floating above a neon ocean and the model obliges instantly – serving you images you’ve never seen before. 

If a system has absorbed such an enormous archive of visual culture, surely it should be capable of recombining those ideas in endlessly original ways, shouldn’t it? 

But a recent experiment tested this by removing humans from the creative loop – and the results suggest that AI really does need human inventiveness to make art. 

The visual telephone experiment

In the experiment, two AI systems were placed into what researchers described as a ‘visual telephone’ loop: one model generated an image, while another described what it saw in text. 

That description then became the prompt for the next image, and the cycle repeated itself hundreds (eventually thousands) of times. In theory, this self-referential process should have produced a stream of increasingly strange or unexpected visuals, the way a whispered phrase mutates during a game of telephone.
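The describe-then-regenerate cycle can be sketched in a few lines. This is a toy stand-in, not the researchers' actual setup: `render_image` and `caption_image` are invented placeholder functions, and the captioner is made deliberately lossy (it only names motifs it has seen often) to show how such a loop can settle into a fixed point instead of mutating endlessly.

```python
# Toy sketch of a 'visual telephone' loop. The real experiment paired a
# text-to-image model with an image-captioning model; here both are
# replaced by tiny stand-ins.

def render_image(prompt: str) -> str:
    # Stand-in for a text-to-image model: the "image" is just the
    # prompt's visual elements, lowercased.
    return prompt.lower()

# Motifs the stand-in captioner has seen often enough to name reliably.
COMMON_MOTIFS = {"neon", "city", "rain", "cathedral", "lighthouse", "beach"}

def caption_image(image: str) -> str:
    # Stand-in for a captioning model: it names familiar motifs but
    # drops words it has rarely seen -- the lossy step that pulls the
    # loop toward cliche.
    words = [w for w in image.split() if w in COMMON_MOTIFS]
    return " ".join(words) or "city"

prompt = "floating obsidian cathedral above a neon jellyfish beach"
for step in range(5):
    image = render_image(prompt)
    prompt = caption_image(image)
    print(step, prompt)
```

The unusual elements ("floating obsidian", "jellyfish") vanish after a single pass, and the loop then repeats "cathedral neon beach" forever: a fixed point, not a stream of novelty.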

But that’s not what happened. Instead, the system began drifting in a very particular direction.

Across thousands of iterations, the AI repeatedly converged on around a dozen visual motifs. Among them were: 

  • Rainy night cities illuminated by neon lights
  • Gothic cathedrals towering into dramatic skies
  • Windswept beaches
  • Solitary lighthouses
  • Softly lit luxury interiors 

The images were fine; some were even beautiful. But they were also very familiar. Observers have described the results as ‘visual elevator music’ – aesthetically pleasing and technically competent, yet oddly predictable.

Why AI gravitates toward clichés

The pattern reveals something fundamental about how generative AI actually works. These systems are trained on vast collections of image-text pairs gathered from across the internet, learning statistical relationships between words and visual patterns. 

When asked to generate an image, the model doesn’t invent an entirely new concept – it samples from the patterns it has seen before, and assembles elements that are most likely to satisfy the prompt.

And the internet, like human culture itself, has its habits.

Certain visual motifs appear repeatedly in online imagery: dramatic skylines at night, scenic coastlines, grand architectural landmarks. Because these images appear frequently in training datasets, they become statistically safe outputs for a model trying to produce something plausible. If you leave it without human direction, the system gradually gravitates toward these highly probable images – because they’re the centre of its visual gravity.
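This gravitational pull can be simulated with a toy model. The motif list and frequencies below are invented for illustration: a "training set" where a few motifs dominate, and a generator that prefers statistically plausible outputs (modelled here by squaring each motif's frequency before sampling). Feeding each round's outputs back in as the next round's data, the most common motif's share grows until it swallows nearly everything.

```python
import random
from collections import Counter

# Hypothetical motif frequencies standing in for a training set: a few
# motifs dominate, with a tiny tail of rarer ideas.
training = (["neon city at night"] * 500 + ["gothic cathedral"] * 300 +
            ["windswept beach"] * 150 + ["floating obsidian garden"] * 3)
counts = Counter(training)

random.seed(0)
for round_ in range(5):
    # Squaring the frequencies models a mode-seeking generator that
    # favours the "statistically safe" outputs over the rare ones.
    batch = random.choices(list(counts),
                           weights=[c ** 2 for c in counts.values()],
                           k=1000)
    # The batch becomes the next round's "data" -- the feedback loop.
    counts = Counter(batch)
    top, n = counts.most_common(1)[0]
    share = n / 1000
    print(round_, top, round(share, 3))
```

Within a handful of rounds the dominant motif's share climbs toward 100%, while the rare "floating obsidian garden" disappears entirely: a crude picture of why an unguided loop collapses onto its centre of visual gravity.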

So the machine drifts towards the cultural average. 

The risk of creative convergence

This behaviour fits neatly with a growing body of research examining how generative AI affects human creativity as well. Studies exploring AI-assisted design processes have found that when participants see AI-generated images during brainstorming sessions, their own ideas often begin to cluster around those examples – a phenomenon known as design fixation.

If we allow this to happen without human interruption and influence, then AI will gradually narrow the creative search space, nudging people toward variations of the same themes.

None of this means generative AI is incapable of creativity in practice. In real workflows, human prompts act as a steering mechanism – we introduce context, constraints and aesthetic judgement that guide the system toward more interesting territory. Designers push against the statistical tendencies of the model by asking for unusual combinations and refining outputs (and throwing out the outputs that feel too obvious). 

Creativity still needs a conductor (that’s you, fellow human)

The experiment shows that if we (as humans with human intentions) don’t shape the process, then generative systems tend to settle into the safest corners of their training data, and repeat familiar cultural patterns rather than inventing new ones.

The result isn’t chaos or radical originality. It’s a loop of cathedrals and beaches and glowing city streets – images that are kind of comforting, like a playlist designed to offend no one and surprise no one either.

Left to itself, the machine doesn’t begin a new art movement.

But with you in the loop – everything is possible. 

Join us at LEAP from 31 August – 3 September 2026 to hear directly from the people shaping the future of technology and why human creativity remains at the centre of it.
