If you haven’t already, subscribe to join our community and receive weekly AI insights, updates, and interviews with industry experts straight to your feed.
Last week, we wrote about a major AI hallucination case in US law. And it got us thinking about the new AI careers that are emerging all the time.
Because working in AI doesn’t mean you have to write code or build models anymore. Today, a new kind of work is becoming increasingly important – the work of checking what AI produces.
That hallucination case involved a high-stakes legal filing, prepared with the support of AI tools, that included fabricated citations and misquoted legal text. The firm already had AI policies, leading law experts in the loop, and review processes in place.
And yet those errors still made it into court.
This represents more than a single mistake. It marks the rise of the verification economy – and with it, a new category of AI careers.
We’ve all been spending a lot of time thinking about AI that can generate.
But as adoption scales, the verification layer is just as important. Who verifies those outputs, and how?
Far from a niche concern, this is becoming central to how organisations operate. When AI is used in legal filings, financial models, medical research, or policy analysis, the cost of being wrong is high. That shifts verification from a final check to a core function.
As past DeepFest speaker and AI expert Lee Tiedrich told us in an interview,
“Society faces the grand challenge of unlocking AI’s tremendous promises while also safeguarding against its harms and risks.”
Verification is one of the places where that safeguarding happens.
In practice, this means the AI workforce is expanding beyond engineers and researchers – into roles focused on trust and validation.
We’re already seeing early versions of these roles emerge.
These roles don’t always require deep model-building expertise. They rely on critical thinking, domain knowledge, and the ability to understand how and why AI fails.
If you’re interested in working in AI, this opens up a different entry point. You don’t have to be training large language models to be part of the AI ecosystem. You can be the person who ensures those models are used responsibly and effectively.
When we interviewed Roman Yampolskiy (AI Author and Director at Cybersecurity Lab, University of Louisville), he was clear about what’s needed:
“There is a need for more proactive engagement, rigorous safety research, and ethical considerations integrated into the AI development lifecycle.”
That lifecycle doesn’t end when a model is deployed. And as organisations integrate AI into core operations, the demand for people who can interrogate AI will grow quickly.
Instead of focusing only on learning to build with AI, aspiring professionals could lean into a different question: how can you challenge AI?
That means learning to interrogate outputs: questioning sources, testing claims against reality, and understanding where and why models tend to fail.
In the verification economy, these are becoming valuable skills.
We’re still early in the development of this new careers space. Many of these roles are informal, embedded within existing jobs, or evolving in real time – but that won’t last.
As regulation develops, and as high-profile incidents continue to surface, organisations will formalise verification functions – with clearer job titles, career paths, and expectations.
So as AI continues to generate more of the world’s content and decisions, take a moment to ask yourself who you want to be: the person who produces AI outputs, or the person who makes sure the outputs are right?
Open this newsletter on LinkedIn and tell us in the comments – where do you see the biggest opportunities in AI careers?