The Paris Journal on AI & Digital Ethics

AI as Bureaucracy: The Standardisation of Judgment

Carina Prunkl¹,²

DOI: 10.65701/z4t0k6c1p9

Corresponding author:
carinaprunkl@gmail.com

Abstract

AI-driven decision-making is often framed as an antidote to the flaws of human judgment (Garcia-Vidal, Sanjuan, Puerta-Alcalde, Moreno-García, and Soriano, 2019; Kahneman, Sibony, and Sunstein, 2021). It is praised for improving efficiency, ensuring consistency, and reducing personal biases. Critics, on the other hand, argue that AI systems — like humans — can replicate biases (Noble, 2018) and that they lack mechanisms of accountability that apply to human decision-makers. Most of these discussions compare AI-driven decisions to individual decision-making, discussing how AI either improves or replaces human judgment (Kahneman et al., 2021; Spaulding, 2020). This talk challenges that framing, arguing that the more appropriate comparison is with bureaucratic decision-making. AI systems exhibit deep structural similarities with bureaucracies, including rule-based standardisation, depersonalised authority, task specialisation, and a strong reliance on quantification. Recognising this parallel shifts the focus from AI as a substitute for human judgment to AI as a system that mirrors bureaucratic decision-making, with its own institutional logic and constraints.

This talk establishes the parallel by critically examining the shared structural features of AI and bureaucratic decision-making. First, both function through formalised rules and procedures, requiring a prior establishment of categories and quantification. Second, bureaucracies justify their authority through such procedural formalism and quantification (Porter, 1996), presenting their decisions as neutral and objective. Similar properties are often ascribed to AI systems, which are praised for their impartiality (no contextual values), reliability, and temporal invariance (Creel and Hellman, 2022; Garcia-Vidal et al., 2019; Kleinberg and Raghavan, 2021). Third, as a result of their strict adherence to rules, bureaucracies and AI systems both limit case-by-case discretion in favour of standardisation and consistency. This emphasis on rules, however, does not eliminate human discretion entirely; rather, it constrains and redistributes it (Dworkin, 1978; Hawkins, 2022; Hupe, 2013). In bureaucracies, discretion may shift to a higher level (e.g. rule-makers) or be restricted by narrowing the choices available to officials (e.g. judges once enjoyed greater discretionary freedom) (Goodin, 1986). In the context of AI, discretionary decisions are mostly embedded in system design, including data selection, model parameters, and optimisation goals, while the discretion of frontline actors is often constrained (see, e.g., Riso et al., 2021). Fourth, AI-driven and bureaucratic decision-making are both often task-specific, optimised for particular functions rather than general reasoning. Bureaucracies divide labour through specialised departments, while AI models are typically trained for specific domains, reinforcing their limited adaptability. Finally, both bureaucratic and AI-driven decision-making obscure personal accountability, though they do so in different ways.
While bureaucracies maintain formal chains of responsibility, AI systems often introduce additional layers of opacity, thereby exacerbating the issue (Selbst and Barocas, 2018). These and other structural parallels make bureaucracies a more fitting comparison for AI than individual human judgment, which, among other things, relies on contextual reasoning, adaptability, personal accountability, and tacit knowledge.

 
