The Paris Journal on AI & Digital Ethics

From Black Box to Glass Box: Leveraging Blockchain to Audit AI Systems Through Multistakeholder Participation

Roberto Caparroz de Almeida¹, Lucas Maldonado Latini²

DOI: 10.65701/m4qf7v2k1n

Corresponding author:
roberto.almeida@fgv.br

Abstract

A pervasive trust deficit plagues AI’s widespread adoption due to its “black box” nature, an opacity that makes it challenging to understand how certain outputs or decisions are reached. Previous studies have examined the use of blockchain technology for AI audits; however, they do not address how a multistakeholder approach could be made feasible. To address these shortcomings, our research proposes a framework that harnesses blockchain technology for AI auditing by involving several stakeholders. Blockchain’s core attributes of decentralisation, immutability, and cryptographic security offer the opportunity to record and verify AI processes transparently. Specifically, we suggest the implementation of a blockchain-based audit trail that stores cryptographic proofs detailing model parameters, data sources, and algorithmic updates at various checkpoints, which different actors can verify. This shared verification process democratises oversight, mitigating the concentration of control in the hands of a single authority or company, or even a few such authorities or companies. Taken together, our analysis argues that blockchain auditing is a crucial strategy for transforming AI from an opaque, centralised “black box” into a collaborative, verifiable “glass box”. A central advantage of our proposal is its capacity to foster multistakeholder governance: in a decentralised ledger environment, diverse participants, including regulators, academic institutions, professional bodies, and civil society representatives, can collectively validate AI systems’ integrity.
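The audit-trail mechanism described in the abstract can be illustrated with a minimal sketch. The snippet below is a hypothetical simplification, not the paper’s implementation: each checkpoint record (a digest of model parameters, a data-source identifier, an update note) is hashed together with the previous proof, yielding a tamper-evident hash chain that any stakeholder holding the records could recompute and compare against values anchored on a ledger. Record field names and values are illustrative assumptions.

```python
import hashlib
import json

def checkpoint_proof(prev_hash: str, record: dict) -> str:
    """Hash a checkpoint record together with the previous proof,
    forming a tamper-evident chain (illustrative sketch only)."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

# Hypothetical checkpoint records for two stages of a model's lifecycle.
genesis = "0" * 64
h1 = checkpoint_proof(
    genesis,
    {"model_params_digest": "abc123", "data_source": "dataset-v1"},
)
h2 = checkpoint_proof(
    h1,
    {"model_params_digest": "def456", "update": "retrained on dataset-v2"},
)

# Any participant holding the records can recompute the chain; a match
# with the on-chain proofs indicates the records were not altered.
recomputed = checkpoint_proof(
    h1,
    {"model_params_digest": "def456", "update": "retrained on dataset-v2"},
)
assert recomputed == h2
```

Because verification requires only the records and the published hashes, regulators, academics, and civil-society auditors can each check integrity independently, which is the multistakeholder property the framework relies on.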
