Auditing the Black Box: How Blockchain Technology Can Make AI More Transparent

Artificial intelligence powers more of our daily lives than ever, yet the systems behind it remain largely closed, raising urgent concerns about bias, accountability, and trust. This blog explores how blockchain’s unique properties—immutability, transparency, and decentralization—can enable collaborative auditing frameworks that make AI more open and representative. By combining off-chain analysis with on-chain verification, blockchain offers a path toward building AI models that are auditable, democratic, and better aligned with human diversity.

October 3, 2025

Millions of people use artificial intelligence models every day—at work, in their studies, and increasingly for internet searches. Interacting with these models helps us produce more in less time, and the quality of the responses is often impressive. This sense of wonder, however, masks a fundamental problem: large commercial AI models are true "black boxes." We do not know how they were trained, what sources they draw on, or how they arrive at particular responses.

In a scenario where AI becomes increasingly ubiquitous in our lives, we must have access to what is really behind the technology that surrounds and guides us. We therefore need transparent auditing mechanisms capable of identifying and correcting this information asymmetry. Current verification methods are clearly insufficient to meet this challenge.

A closed design

Large commercial models offer us ready-made and seductive solutions. The problem is that they are, by definition, closed. No one outside the tech giants has real access to the data or training procedures. We live under a 'closed design'—a shield that prevents any external audit of the biases and flaws that these systems may inherently possess.

Existing audits cannot cope with this level of complexity. The model cards offered by companies, while informative, are overly technical and of little use to civil society. Clear technical and regulatory standards need to be defined for the use of auditable logs in AI.

The use of blockchain technology

Technology provides powerful tools for defining parameters and standardizing procedures. The use of blockchain and its inherent characteristics offers a viable solution for the development of a broad and inclusive audit framework.

First, transparency: distributed and immutable records enhance public trust, allowing different institutions to access a verifiable history. Second, accountability: by clearly identifying responsibilities and incorporating correction mechanisms, blockchain fosters an environment of shared governance. Finally, there are gains in plurality: decentralized record-keeping enables multilingual initiatives rooted in regional contexts, reducing the near-total dependence on English in model training.

Yet significant challenges remain

We cannot use blockchain as a comprehensive repository for audited data; storing full datasets on-chain is technically unfeasible. We need a hybrid solution in which data analysis occurs off-chain and only the results (for example, cryptographic hashes of the audit reports) are recorded on the blockchain to guarantee their integrity.
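As a minimal sketch of this hybrid pattern, the Python example below runs a toy bias audit entirely off-chain and anchors only a SHA-256 fingerprint of the report. The names `run_bias_audit` and `AuditAnchor` are hypothetical, introduced here for illustration; a real deployment would write the digest to an actual blockchain rather than an in-memory registry.

```python
import hashlib
import json
from dataclasses import dataclass, field

def run_bias_audit(predictions, groups):
    """Off-chain analysis: compute a simple per-group positive rate."""
    rates = {}
    for g in set(groups):
        selected = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    return rates

def digest(report: dict) -> str:
    """Deterministic SHA-256 fingerprint of an audit report."""
    canonical = json.dumps(report, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

@dataclass
class AuditAnchor:
    """Stand-in for an on-chain registry: stores only hashes, never data."""
    _hashes: list = field(default_factory=list)

    def record(self, report_hash: str) -> None:
        self._hashes.append(report_hash)

    def verify(self, report: dict) -> bool:
        return digest(report) in self._hashes

# Off-chain: the full (possibly sensitive) analysis stays local.
report = run_bias_audit([1, 0, 1, 1], ["en", "pt", "en", "pt"])

# On-chain: only the fingerprint is anchored for later verification.
chain = AuditAnchor()
chain.record(digest(report))
assert chain.verify(report)           # untampered report verifies
assert not chain.verify({"en": 1.0})  # altered report fails
```

Because only the hash crosses the trust boundary, auditors can later prove a published report matches what was anchored, without the chain ever holding the underlying data.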

We propose using blockchain technology as a collaborative infrastructure for auditing AI models, with significant benefits:

  • Provenance and traceability: blockchain ensures full traceability of data and model history, which is essential for verifying the quality and origin of the information used to train the model.
  • Inclusive participation: the distributed nature of blockchain facilitates the involvement of diverse actors, such as regulators, academics, AI developers, and civil society itself.
  • Bias correction and adaptability: models can be audited for bias and adapted to diverse "worldviews," so that AI remains reliable, accessible, and representative regardless of users' language or cultural context.
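The provenance point above can be illustrated with hash-linked records, where each entry commits to the hash of its predecessor, so any tampering with earlier history becomes detectable. This is only a sketch with hypothetical field names (`step`, `details`), not a real provenance standard:

```python
import hashlib
import json

def link(prev_hash: str, entry: dict) -> dict:
    """Append a provenance record whose hash commits to its predecessor."""
    body = {"prev": prev_hash, **entry}
    body_hash = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return {"hash": body_hash, **body}

GENESIS = "0" * 64  # conventional all-zero hash for the first record

# A toy lineage: a dataset record, then a training run that points to it.
dataset = link(GENESIS, {"step": "dataset", "details": "multilingual corpus v1"})
training = link(dataset["hash"], {"step": "training", "details": "run 2025-10-01"})

# Changing any detail of the dataset record yields a different hash,
# breaking the link stored in every later record.
altered = link(GENESIS, {"step": "dataset", "details": "multilingual corpus v2"})
assert altered["hash"] != dataset["hash"]
assert training["prev"] == dataset["hash"]
```

The same chaining idea underlies blockchain itself; here it simply makes the lineage from training data to released model independently checkable.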

The future must be auditable

The convergence between blockchain and AI to create an auditable and collaborative standard for models is a fundamental step. We must turn opaque and virtually inaccessible models into transparent and inclusive systems. This is a democratic imperative, necessary for the preservation of human diversity in the age of artificial intelligence.

There is also an existential challenge: leading scientists in the field believe that Artificial General Intelligence (AGI) may be five or ten years away. That changes everything. On the one hand, much of the current discussion about artificial intelligence can be addressed with the existing regulatory and normative framework: complex though they are, issues such as civil liability, damages resulting from bias, and copyright can be resolved within the current legal framework.

On the other hand, the advent of AGI would be unprecedented in history: if it materializes, for the first time we will coexist with entities that are faster, smarter, and more powerful than we are, and only a massive collective effort on a global scale can prepare us for this challenge.

The time to act is now.