
Welcome to the 14 new deep divers who joined us since last Wednesday.
We all know that AI is only as good as the data it’s trained on – and that biased data can lead to dangerously flawed decision-making, with real-world consequences.
So how do we fix this?
A growing number of voices are speaking up about the potential of blockchain to solve AI’s bias problem.
One is the backbone of crypto, the other the brains behind your smart assistant. But together, blockchain and AI could form a powerful duo in the fight for trustworthy, accurate, and equitable AI systems.
To dig into this, we spoke with Marcello Mari (Founder and CEO at SingularityDAO), who is passionate about the intersection of decentralisation and intelligent systems:
“Yes, blockchain can help.”
That was Mari’s immediate response when we asked if blockchain could help combat bias in AI.
“Yes,” he went on, “the transparency of blockchain data can play a significant role in combating bias within AI systems. Firstly, because blockchain’s immutable ledger ensures that all data inputs and AI model changes are recorded transparently.
“This allows stakeholders to trace the origins of training data, monitor the evolution of the model, and audit decision-making processes. By identifying bias in the data or algorithms, corrective actions can be taken more efficiently.”
Let’s break that down a bit – and look at six specific ways blockchain can improve the quality and integrity of data for AI.
AI systems are often described as ‘black boxes.’ We know what goes in, and we see what comes out, but what happens in between can be murky. Blockchain brings light into that box.
“Blockchain’s immutable ledger ensures that all data inputs and AI model changes are recorded transparently,” said Mari. “This allows stakeholders to trace the origins of training data, monitor the evolution of the model, and audit decision-making processes.”
In practice, this means that if an AI makes a poor decision (like unfairly denying a loan), you can go back and check what data it was using and what changes were made along the way. That’s a big step toward accountability.
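To make that auditability concrete, here’s a toy Python sketch of an append-only, hash-chained log – plain Python standing in for a real blockchain, with the `AuditLedger` class and its event fields invented purely for illustration:

```python
import hashlib
import json

class AuditLedger:
    """Toy append-only log: each entry is chained to the previous
    one by a SHA-256 hash, so tampering with any past record
    invalidates every entry that follows it."""

    def __init__(self):
        self.entries = []

    def record(self, event: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(event, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        prev_hash = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if entry["prev"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True

ledger = AuditLedger()
ledger.record({"type": "data_input", "dataset": "loan_applications_v1"})
ledger.record({"type": "model_update", "version": 2, "note": "retrained"})
assert ledger.verify()

# Quietly rewriting history is now detectable:
ledger.entries[0]["event"]["dataset"] = "loan_applications_edited"
assert not ledger.verify()
```

A real blockchain adds distribution and consensus on top of this chaining, but the auditability Mari describes rests on exactly this property: past records can’t be changed without leaving evidence.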
Bias often starts at the data level. If your AI is trained on incomplete or skewed data, its decisions will reflect those flaws. Blockchain can verify where data came from and whether it’s reliable.
“Bias in AI often stems from flawed or incomplete datasets. Blockchain can verify the provenance of data, ensuring it comes from diverse, trustworthy sources,” Mari explained.
This is especially crucial in sectors like healthcare, where data accuracy is a matter of life or death. By using blockchain to certify and validate data sources, we can train AI systems on stronger foundations.
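One minimal form of provenance checking is publishing a fingerprint (hash) of a dataset and re-verifying it before training. The sketch below is hypothetical – the on-chain step (publishing the fingerprint to an immutable ledger) is assumed rather than shown, and the rows are invented:

```python
import hashlib

def fingerprint(rows) -> str:
    """Order-independent SHA-256 fingerprint of a dataset of text rows."""
    digest = hashlib.sha256()
    for row in sorted(rows):
        digest.update(row.encode())
    return digest.hexdigest()

# The data provider publishes this fingerprint (in practice, on-chain):
published = fingerprint(["patient,42,stable", "patient,57,critical"])

# Before training, a consumer re-computes it over the data they received:
received = ["patient,57,critical", "patient,42,stable"]
assert fingerprint(received) == published  # same rows, any order

# A silently altered or padded dataset no longer matches:
assert fingerprint(received + ["patient,99,fabricated"]) != published
```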
One of the biggest problems in AI is data concentration. A few big tech companies control most of the world's data, which means their biases get baked into the AI they create.
“Blockchain can facilitate decentralised data marketplaces where diverse datasets are securely shared and accessed,” Mari noted. “This can democratise access to data, ensuring that smaller organisations and underrepresented groups contribute to and benefit from the AI ecosystem.”
This shift towards open, decentralised data sharing could give voice to communities that are often excluded – and in doing so, create more representative AI systems.
It's one thing to trace the data used to train an AI. But what about the decisions the AI makes after deployment?
“By recording AI model outputs, decisions, and reasoning on the blockchain, biases in decision-making processes can be identified and challenged,” said Mari. “For example, if an AI system denies a loan or screens job applications unfairly, blockchain-based records can reveal whether the decision was influenced by discriminatory factors.”
This kind of post-hoc transparency is key to building trust in AI. It’s about proving what went right just as much as understanding what went wrong – so we can continue to iterate and improve AI systems.
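As a sketch of what such a post-hoc audit could look like, assume each decision was recorded with the applicant’s group and the outcome (the records, groups, and rates below are invented for illustration):

```python
def approval_rate(records, group):
    """Share of recorded decisions in `group` that were approved."""
    in_group = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in in_group) / len(in_group)

# Hypothetical immutably recorded loan decisions:
decisions = [
    {"applicant": 1, "group": "A", "approved": True},
    {"applicant": 2, "group": "A", "approved": True},
    {"applicant": 3, "group": "A", "approved": True},
    {"applicant": 4, "group": "A", "approved": False},
    {"applicant": 5, "group": "B", "approved": True},
    {"applicant": 6, "group": "B", "approved": False},
    {"applicant": 7, "group": "B", "approved": False},
    {"applicant": 8, "group": "B", "approved": False},
]

gap = abs(approval_rate(decisions, "A") - approval_rate(decisions, "B"))
print(gap)  # 0.5 – a disparity an auditor could surface and challenge
```

The blockchain’s role here is only to guarantee the records are complete and untampered; the fairness analysis itself is ordinary statistics run over them.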
AI oversight shouldn’t be left to a handful of executives in a boardroom. Blockchain enables something more democratic.
“Blockchain enables decentralised governance models where communities can oversee AI systems and raise concerns about bias. Smart contracts could enforce rules for ethical AI behaviour, ensuring adherence to fairness standards.”
This kind of decentralised governance could allow the public to challenge harmful AI outcomes in real time – and even help shape the rules that define what’s fair.
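To illustrate the shape of such a rule, here’s a toy Python stand-in for an on-chain governance contract – a real one would actually run on-chain (for example as a smart contract), and the quorum of three and the member names are pure assumptions:

```python
class GovernanceContract:
    """Toy stand-in for an on-chain governance contract: any registered
    member can flag a model version for bias; once enough members
    agree, the version is suspended automatically."""

    QUORUM = 3  # hypothetical number of votes needed to suspend

    def __init__(self, members):
        self.members = set(members)
        self.votes = {}        # model version -> set of members who flagged it
        self.suspended = set()

    def flag(self, member, model_version):
        if member not in self.members:
            raise PermissionError("not a registered member")
        self.votes.setdefault(model_version, set()).add(member)
        if len(self.votes[model_version]) >= self.QUORUM:
            self.suspended.add(model_version)

gov = GovernanceContract(["alice", "bob", "carol", "dan"])
gov.flag("alice", "loan-model-v2")
gov.flag("bob", "loan-model-v2")
assert "loan-model-v2" not in gov.suspended  # below quorum
gov.flag("carol", "loan-model-v2")
assert "loan-model-v2" in gov.suspended      # quorum reached
```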
What if developers got rewarded not just for building AI, but for building it right?
“Through tokenised incentives, blockchain can reward data contributors and developers who prioritise fairness and reduce bias in their datasets and models. This could foster a culture of ethical AI development,” Mari said.
Imagine earning tokens for contributing unbiased data to a training set, or for flagging a harmful AI behaviour. That kind of system could align financial incentives with ethical outcomes.
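A toy sketch of that incentive loop – the balance metric, the threshold, and the reward amount are all invented for illustration, and in practice the tokens would be minted by a token contract rather than tracked in a dictionary:

```python
def group_balance(rows) -> float:
    """Fraction of rows from the smaller of two groups (0.5 = perfectly balanced)."""
    count_a = sum(1 for row in rows if row["group"] == "A")
    return min(count_a, len(rows) - count_a) / len(rows)

def submit_dataset(contributor, rows, balances, min_balance=0.4, reward=10):
    """Credit tokens only when the data clears a (hypothetical)
    balance threshold – paying contributors for fairness, not just volume."""
    if group_balance(rows) >= min_balance:
        balances[contributor] = balances.get(contributor, 0) + reward
        return True
    return False

balances = {}
balanced = [{"group": "A"}, {"group": "B"}, {"group": "A"}, {"group": "B"}]
skewed = [{"group": "A"}] * 9 + [{"group": "B"}]

assert submit_dataset("careful_contributor", balanced, balances)     # rewarded
assert not submit_dataset("careless_contributor", skewed, balances)  # no reward
print(balances)  # {'careful_contributor': 10}
```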
The potential here is huge. According to a 2024 report by PwC, the integration of blockchain and AI could “enable data integrity, model accountability and decision transparency at a scale never before possible”.
Another study by IBM Research highlights how blockchain-based data provenance systems are already being tested to improve trust in AI healthcare applications.
It’s clear that combining the transparency of blockchain with the power of AI could help shift us away from opaque algorithms and toward a future of accountable, ethical intelligence.
As Mari put it, “the transparency of blockchain data can play a significant role in combating bias.” In a world where AI is increasingly making decisions that affect all of us, that transparency is essential.
Moving towards tech that works for everyone