Reasoning first: Why we’re at a turning point for thinking AI

If you haven’t already, subscribe and join our community in receiving weekly AI insights, updates and interviews with industry experts straight to your feed.


DeepDive 

Your weekly immersion in AI 

We’ve taught machines to talk, and now we’re teaching them to think. Not in the sci-fi sense of consciousness, but in a practical one: organisations are increasingly seeking systems that can follow a chain of logic, adjust their strategy, and explain how they got there.

This shift is evident in the latest data and research, from the State of AI Report 2025 to new work on hybrid and neuro-symbolic models. It all points to the same conclusion: we're entering a reasoning-first era for AI.

Reasoning defined the year 

According to the State of AI Report, “reasoning defined the year,” as frontier labs used reinforcement learning, rubric-based rewards and verifiable reasoning to build models that can plan, reflect, self-correct, and work over longer horizons.

The same report introduces its inaugural AI Practitioner Survey, the largest open-access survey of AI practitioners, with over 1,200 respondents, focused on how people actually use AI. It finds that 95% of professionals now use AI at work or at home, 76% pay for AI tools out of pocket, and 44% of US businesses now pay for AI tools (up from 5% in 2023), with average contracts around $530,000.

This confirms what most of us are already experiencing in our everyday lives: AI tools have gone mainstream, and the focus is now moving towards models that can actually reason, not just generate.

From pattern machines to structured thinkers

This need for genuine reasoning shows up strongly in recent research. One 2025 survey argues that while modern deep learning has achieved “remarkable success in perception tasks”, it still “falls short in interpretable and structured reasoning.”

To address that gap, the authors point to neural-symbolic AI: architectures that integrate symbolic logic with neural computation to unify learning and reasoning. They introduce a three-dimensional taxonomy of reasoning paradigms and review advances such as differentiable logic programming, logic-aware transformers, and LLM-based symbolic planning. 

In other words, the research community is no longer satisfied with pure pattern-matching. It’s actively rebuilding AI around structure, logic, and explanation. 

A new principle: think only when thinking helps

Of course, reasoning isn’t free. Recent large reasoning models (LRMs) can deliver dramatically better performance than standard LLMs – but at the cost of long ‘thinking traces’ and higher latency.

That's the starting point for this 2025 study, titled 'Think Only When You Need with Large Hybrid-Reasoning Models'. The authors introduce large hybrid-reasoning models (LHRMs), which are explicitly designed to decide when to reason and when not to. Using a two-stage training pipeline (hybrid fine-tuning followed by hybrid group policy optimisation), LHRMs learn to trigger extended thinking only for harder queries.
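To make the idea concrete, here is a minimal sketch of the "think only when you need" pattern. It is not the paper's method: the LHRM routing behaviour is learned through the two-stage pipeline above, whereas the difficulty check below is a hand-coded heuristic stand-in, and both generation calls are placeholders.

```python
# Illustrative sketch only: a router picks a fast path or an extended reasoning path.
# In the LHRM paper this decision is learned, not hard-coded as it is here.

def needs_extended_thinking(query: str) -> bool:
    """Crude, hypothetical proxy for query difficulty (not the paper's learned router)."""
    hard_markers = ("prove", "step by step", "how many", "optimise", "derive")
    return len(query.split()) > 30 or any(m in query.lower() for m in hard_markers)

def answer(query: str) -> str:
    if needs_extended_thinking(query):
        # Placeholder for a long chain-of-thought / reasoning-mode generation call.
        return f"[reasoning mode] working through: {query!r}"
    # Placeholder for a cheap, direct generation call.
    return f"[fast mode] direct answer to: {query!r}"

if __name__ == "__main__":
    print(answer("What's the capital of France?"))
    print(answer("Prove that the sum of two even integers is even, step by step."))
```

The point the sketch captures is the cost asymmetry: most queries take the cheap path, and the expensive reasoning path is reserved for the minority of queries that actually benefit from it.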

Their experiments show that LHRMs outperform existing LRMs and LLMs in both reasoning and general capabilities, while significantly improving efficiency.

It might be a subtle shift, but we think it's an important one: intelligence isn't just about being able to think; it's about knowing when thinking is worth the cost.

Neuro-symbolic hybrids: smaller, smarter, cheaper

Another 2025 study tracks the progress of frontier LLMs over an 18-month period using the PrOntoQA logical reasoning benchmark. The authors find that reasoning performance clearly improved across snapshots taken in December 2023, September 2024 and June 2025, with a big jump linked first to hidden chain-of-thought prompting and later to the introduction of dedicated 'thinking models'.

But they also quantify the trade-offs: more tokens and more FLOPs. To address this, they propose a neuro-symbolic architecture in which LLMs with fewer than 15 billion parameters translate problems into a standardised form, and a Z3 solver then handles the actual logical satisfiability checking. This approach significantly reduces computational cost while maintaining close-to-perfect performance on the task.
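A rough sketch of what the solver side of that split looks like, assuming a small model has already translated a PrOntoQA-style question into explicit facts and rules. The example facts (cats, mammals, Tom) are invented for illustration, not taken from the paper; it runs with the z3-solver Python package.

```python
# Illustrative "small LLM + symbolic solver" split: the LLM's job is to produce the
# facts and rules below in a standardised form; Z3 then does the entailment check.
from z3 import Solver, DeclareSort, Const, Function, BoolSort, ForAll, Implies, Not, unsat

Entity = DeclareSort("Entity")
cat = Function("cat", Entity, BoolSort())
mammal = Function("mammal", Entity, BoolSort())
animal = Function("animal", Entity, BoolSort())
tom = Const("tom", Entity)
x = Const("x", Entity)

s = Solver()
s.add(ForAll([x], Implies(cat(x), mammal(x))))     # every cat is a mammal
s.add(ForAll([x], Implies(mammal(x), animal(x))))  # every mammal is an animal
s.add(cat(tom))                                    # Tom is a cat

# The premises entail "Tom is an animal" iff adding its negation is unsatisfiable.
s.add(Not(animal(tom)))
print("entailed" if s.check() == unsat else "not entailed")  # -> entailed
```

The division of labour is the interesting part: the language model handles the messy translation from natural language, while the solver guarantees that the final logical step is exact and cheap.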

So smarter reasoning doesn’t have to mean ever-bigger models. It might instead come from carefully combining smaller models with symbolic tools.

Industry: towards hybrid, governable AI

This reasoning-first trend isn’t confined to papers. In financial services, for example, techUK recently published an article on the evolution of AI reasoning, which describes how the sector is moving into a hybrid era, combining the language capabilities of LLMs with deterministic graph-based inference to meet regulatory and compliance demands.

The article traces a familiar arc: from early rule-based expert systems (predictable and explainable, but brittle), through powerful but opaque probabilistic models, to hybrid architectures where deterministic knowledge graphs enforce traceability, consistency and compliance. 

For regulated institutions like banks, this matters: regulators are increasingly focused on fairness and accountability in AI use. Hybrid reasoning systems, where every decision can be traced back to explicit rules, offer a route to powerful yet governable AI.
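As a toy illustration of what "traceable back to explicit rules" can mean in practice, here is a deterministic rule check that records every rule it evaluates. The rule names, descriptions and thresholds are invented for the example and are not drawn from the techUK article.

```python
# Toy sketch of governable, rule-traced decisioning: every rule evaluation is logged,
# so the final decision comes with an audit trail. Rules here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool
    trace: list = field(default_factory=list)  # which rules fired, and with what result

RULES = [
    ("KYC-1", "applicant must be identity-verified", lambda f: f["identity_verified"]),
    ("AML-3", "no unresolved sanctions flags",       lambda f: not f["sanctions_flag"]),
    ("CRD-2", "debt-to-income ratio below 0.4",      lambda f: f["dti"] < 0.4),
]

def decide(facts: dict) -> Decision:
    trace, approved = [], True
    for rule_id, description, check in RULES:
        passed = check(facts)
        trace.append(f"{rule_id}: {description} -> {'pass' if passed else 'fail'}")
        approved = approved and passed
    return Decision(approved, trace)

result = decide({"identity_verified": True, "sanctions_flag": False, "dti": 0.55})
print(result.approved)            # False
print("\n".join(result.trace))    # full audit trail of which rules passed or failed
```

In a hybrid setup, an LLM might draft the explanation or gather the inputs, but the decision itself stays on this deterministic, auditable path.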

A structural shift in our expectations of AI

Taken together, these pieces of research paint a coherent picture. We don't think this is just another turn of the hype cycle; it looks more like a structural shift, from AI that can write and make images to AI that can reason reliably, efficiently, and under human oversight.

So we want to know – what’s the first real-world problem you’d hand over to reasoning-capable AI? 
