The battle ahead for AI and financial crime

Welcome to the 6 new deep divers who joined us since last Wednesday.

If you haven’t already, subscribe and join our community in receiving weekly AI insights, updates and interviews with industry experts straight to your feed.


DeepDive 

In 2025, the financial services industry is no longer asking if AI belongs in the fight against financial crime – only how fast and how responsibly it can be deployed. 

Feedzai’s latest State of AI report captures this shift with precision. It offers a detailed look at how banks, fintechs, and regulators are deploying AI to stay ahead of an increasingly sophisticated threat landscape.

Drawing from a global survey of more than 560 senior professionals across financial services, the report provides both a strategic snapshot and a wake-up call. 

We’ve read it so you don’t have to – and pulled the key insights that will shape the road ahead.

AI adoption is now mainstream, but not mature

The report confirms that 87% of financial institutions have already integrated AI into their financial crime strategies. That’s a clear sign that AI has moved beyond experimental pilots – it’s now embedded in day-to-day fraud detection, AML processes, and transaction monitoring systems.

But the maturity of that adoption varies. While many organisations have rolled out machine learning for risk scoring and anomaly detection, far fewer have deployed real-time, adaptive AI pipelines capable of responding to novel threats on the fly. 
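To make the contrast concrete, here's a deliberately simplified sketch of rule-free anomaly scoring: instead of a static threshold, each transaction is scored against the account's own history. The features, data, and thresholds below are invented for illustration – this is not Feedzai's implementation or any vendor's system.

```python
from statistics import mean, stdev

def anomaly_score(amount: float, history: list[float]) -> float:
    """Z-score of a new transaction against the account's own history.
    Unlike a static rule ("flag anything over $500"), the threshold
    adapts to each account's typical behaviour."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma if sigma > 0 else 0.0

# Hypothetical account history, for illustration only
history = [42.0, 55.0, 38.0, 61.0, 47.0]
print(anomaly_score(50.0, history))   # typical amount -> low score
print(anomaly_score(900.0, history))  # large deviation -> high score
```

Real pipelines use far richer features (device, location, velocity) and learned models, but the principle is the same: the baseline moves with the behaviour, rather than waiting for an analyst to rewrite a rule.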

Feedzai’s researchers highlight this gap as both a challenge and an opportunity – particularly as criminals begin to adopt generative AI for more complex and scalable attacks.

GenAI is powering a new wave of criminal innovation 

Our colleagues over at Black Hat MEA talk about the risks of AI in cybersecurity a lot, including (but not limited to) the security of financial organisations. Here’s a recent article they wrote about the rising dangers of AI agents. 

One of Feedzai’s most sobering findings is that criminals are already using generative AI tools to create synthetic identities, deepfake documentation, and highly personalised phishing campaigns. Nearly 80% of respondents said they believe generative AI will increase the overall volume of fraud attempts.

Financial institutions are already seeing signs of this shift. Traditional fraud detection tools, which often rely on static rule sets, are increasingly inadequate against AI-generated threats that evolve faster than manual systems can adapt.

As one survey respondent put it, “Fraudsters are automating everything – why wouldn’t we?”

Explainability and governance are the next frontier

While excitement around AI’s potential in the financial sector is high, concerns about explainability, bias, and governance remain front and centre. Feedzai’s data shows that financial institutions are under pressure to not only use AI effectively, but to justify how and why their models make certain decisions.

This is particularly critical in the context of financial crime. False positives can frustrate customers, and false negatives can expose institutions to regulatory fines. So some firms are now building explainability frameworks into their AI systems – to make sure model decisions can be traced, audited, and defended both internally and to regulators.
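Here's a minimal sketch of what "traceable" can mean in practice: a toy logistic risk score where each feature's additive contribution to the log-odds is returned alongside the decision, so the top driver of any alert can be logged and audited. The feature names and weights are hypothetical, not drawn from the report.

```python
import math

# Hypothetical weights for a toy logistic fraud model
WEIGHTS = {"amount_zscore": 1.2, "new_device": 0.9, "foreign_ip": 0.7}
BIAS = -2.0

def score_with_explanation(features: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return the fraud probability together with each feature's additive
    contribution to the log-odds, so every decision can be traced."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    log_odds = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-log_odds))
    return probability, contributions

prob, why = score_with_explanation(
    {"amount_zscore": 3.0, "new_device": 1.0, "foreign_ip": 0.0}
)
print(f"fraud probability: {prob:.2f}")
print("top driver:", max(why, key=why.get))
```

Linear models are explainable by construction; for more complex models, institutions typically bolt on attribution methods (such as SHAP values) to recover the same kind of per-feature audit trail.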

Privacy and security are top concerns 

With AI systems increasingly relying on vast and diverse data sets, data privacy and security have become central to the AI conversation. Over 60% of survey respondents listed these as top concerns. Ensuring that AI tools meet regulatory standards (such as GDPR in Europe, or financial sector compliance rules) is critical. 

So what can financial institutions do to improve trust? 

Feedzai encourages them to invest in secure model architectures, strong access controls, and privacy-preserving machine learning techniques. These safeguards will be incredibly important for maintaining customer trust in an era where data misuse can erode brand value overnight.
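One such privacy-preserving technique can be sketched in a few lines: releasing an aggregate statistic with Laplace noise, the basic mechanism behind differential privacy. This is an illustrative example of the category, not a description of Feedzai's safeguards.

```python
import random

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise (sensitivity 1), a basic
    differentially private mechanism: the aggregate is useful, but no
    individual record can be inferred from the released value."""
    # Difference of two exponentials is Laplace(0, 1/epsilon)
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

random.seed(7)  # deterministic for the example
print(dp_count(1_000))  # close to 1,000, but never the exact count
```

Smaller `epsilon` means more noise and stronger privacy; choosing it is a policy decision as much as a technical one.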

A call to collaborate 

Possibly the most encouraging insight from Feedzai’s report is the growing recognition that AI in financial crime prevention is not a zero-sum game. We need collaboration across institutions, regulators, and vendors to spot emerging threats early and share best practices.

The report notes that cross-industry data sharing (where privacy permits) is becoming a key enabler of more accurate and robust AI models. Federated learning, for example, is emerging as a powerful tool for training models across institutions without exposing sensitive data.
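The core idea of federated averaging fits in a few lines: each institution trains on its own data and shares only model weights, which a coordinator averages into a global model – raw transaction data never leaves the institution. The bank names and weights below are hypothetical.

```python
def federated_average(local_weights: list[list[float]]) -> list[float]:
    """FedAvg in miniature: the coordinator averages model weights from
    each participant; no raw training data is ever exchanged."""
    n = len(local_weights)
    return [sum(ws) / n for ws in zip(*local_weights)]

# Hypothetical fraud-model weights trained locally at three banks
bank_a = [0.9, -0.2, 0.4]
bank_b = [1.1,  0.0, 0.6]
bank_c = [1.0, -0.1, 0.5]
print(federated_average([bank_a, bank_b, bank_c]))
```

Production systems add secure aggregation and weighting by dataset size, but the exchange is the same shape: weights travel, data stays home.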

AI capabilities are now part of our present reality. But to use AI well, we need to work together to build transparency, accountability, and collaboration across the entire financial ecosystem.

The tech is here – but trust, governance, and alignment will decide who tips the balance in their favour when it comes to AI and financial crime. 
