Could open models be safe enough to trust?
GenAI guardrails are evolving – and they might change the open vs closed debate
Welcome to the 7 new deep divers who joined us since last Wednesday.
If you haven’t already, subscribe and join our community to receive weekly AI insights, updates and interviews with industry experts, straight to your feed.
Your weekly immersion in AI
We’ve spent the last three years grafting large language models onto the edges of our software – chatbots in sidebars, copilots in toolbars, APIs stitched into legacy stacks. But an architectural revolution is underway. The next generation of systems won’t bolt AI on; they’ll be built around it.
These are GenAI-native systems: software with architecture, logic and behaviour that are co-developed with a generative brain at the core.
A 2025 paper on the foundational design principles for building GenAI-native systems captures this moment of change. Its authors argue that we’re entering an era where models are not peripheral components but first-class citizens, and they propose a new vocabulary for system architects built around concepts such as cells, routers and substrates.
In this design philosophy, traditional code handles deterministic tasks such as validation or compliance, while generative modules handle reasoning, adaptation and creativity. A dynamic router decides which layer should take the lead, based on cost, confidence or intent.
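To make the routing idea concrete, here’s a minimal sketch of what such a dynamic router might look like. All names and thresholds here are illustrative assumptions, not taken from the paper, and the generative path is a stub standing in for a real model call:

```python
from dataclasses import dataclass

@dataclass
class Request:
    text: str
    intent: str          # e.g. "validate", "summarise" (hypothetical labels)
    confidence: float    # how confident we are a rule-based path suffices
    budget_cents: float  # cost ceiling for this call

def deterministic_validate(req: Request) -> str:
    # Traditional code path: cheap, predictable, auditable.
    return f"validated: {req.text.strip()}"

def generative_reason(req: Request) -> str:
    # Stand-in for a model call; an LLM client would go here.
    return f"model-drafted response to: {req.text!r}"

def route(req: Request) -> str:
    """Pick which layer leads, based on intent, confidence and cost."""
    if req.intent == "validate":
        return deterministic_validate(req)   # compliance stays deterministic
    if req.confidence >= 0.9 or req.budget_cents < 0.1:
        return deterministic_validate(req)   # high confidence or tight budget
    return generative_reason(req)            # reasoning and creativity go to the model

print(route(Request("  user@example.com ", "validate", 0.5, 1.0)))
print(route(Request("Summarise Q3 results", "summarise", 0.2, 1.0)))
```

The point isn’t the thresholds themselves but where the decision lives: routing is ordinary, testable code, while the expensive generative path is just one branch among several.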
What emerges is an adaptive, hybrid architecture in which the boundaries between ‘the system’ and ‘the model’ dissolve into a single, coherent whole.
We know our readers like to see the use cases that back up the theory, so here’s what we’ve found.
You can already see GenAI-native patterns in production systems. GitHub Copilot, for instance, has evolved from autocomplete novelty to embedded partner. Studies from GitHub and Accenture suggest developers can complete coding tasks up to 55% faster when using it – not because it adds speed to old workflows, but because it reshapes those workflows entirely.
In the consumer sphere, OpenAI’s GPT-4o is a striking example of a unified conversational interface: a real-time, multi-modal system that merges voice, text and vision fluidly.
Likewise, Apple’s Private Cloud Compute, introduced at WWDC 2024, routes user requests between on-device and cloud models depending on privacy and latency – effectively a programmable router operating at the level of the operating system.
And enterprise frameworks are following suit. LangGraph and Microsoft’s Semantic Kernel allow developers to orchestrate complex reasoning chains across multiple models, giving rise to agentic state machines that can plan, verify and correct themselves. Within Azure AI Foundry, these capabilities are being woven directly into cloud pipelines – an early sign of generative intelligence treated as infrastructure rather than add-on.
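The “agentic state machine” pattern those frameworks enable can be sketched in plain Python. This is not LangGraph’s or Semantic Kernel’s actual API, just an illustrative plan → act → verify → correct loop with stub functions standing in for model and tool calls:

```python
def plan(task: str) -> list[str]:
    # Stand-in planner: split a task into steps (a model would do this).
    return [f"step {i}: {part.strip()}" for i, part in enumerate(task.split(","), 1)]

def act(step: str) -> str:
    # Stand-in for a model or tool call executing one step.
    return step.upper()

def verify(result: str) -> bool:
    # Stub check; real systems use rule-based or model-based verifiers.
    return result.isupper()

def run(task: str, max_retries: int = 2) -> list[str]:
    """Execute each planned step, retrying until it passes verification."""
    results = []
    for step in plan(task):
        for _attempt in range(max_retries + 1):
            result = act(step)
            if verify(result):
                results.append(result)
                break
        else:
            results.append(f"FAILED: {step}")  # give up after retries
    return results

print(run("fetch data, summarise, check citations"))
```

What the real frameworks add on top of this skeleton is state management, persistence and graph-structured control flow, so the plan-verify-correct loop can span multiple models and long-running sessions.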
Even robotics is beginning to show what GenAI-native means in physical space. DeepMind’s RT-2 model, for example, unifies vision, language and action so tightly that perception and control merge. Robots no longer just execute commands; they interpret them, reason about context and adapt behaviour. In industrial settings, projects such as RoboBallet demonstrate multi-robot coordination emerging from shared generative understanding rather than rigid programming.
This architectural rethink moves attention away from ever-larger models towards better systems. The next wave of progress will come from designing coherent ecosystems that combine generative creativity with symbolic precision. It’s a mindset change: from prompt-crafting to system-crafting.
It also offers a richer vocabulary for engineers and architects. Concepts like cells, routers and substrates become tools for reasoning about how intelligence flows through a system. And this perspective allows for more fluid evolution: teams can upgrade one module, retrain another, or reroute logic dynamically without breaking the system’s integrity.
We’re moving from a world where the model serves the system, to one where the model is the system. That’s a philosophical shift as well as a technological one. When software begins to exhibit generative coherence, it stops behaving like static code – and starts resembling a living organism. It becomes adaptive, contextual and continuously evolving.
Are you taking steps away from GenAI models layered on top of your systems, to building systems around GenAI? We want to know why and how you’re doing it.