The Paris Journal on AI & Digital Ethics

A Socioinformatic Analysis of Community-based Fact-Checking: Why Community Notes is Not as Effective as X Claims

Julian M. Pelloth¹, Lena Pölzer¹, Katharina A. Zweig¹

DOI: 10.65701/k1q7n3p8t0

Corresponding authors:
julian.pelloth@edu.rptu.de • dem69pod@rptu.de • zweig@cs.uni-kl.de

Abstract

Community Notes, developed by X, aims to fight misinformation by enabling users to add context in the form of notes beneath posts. The system employs an algorithm based on a modified matrix factorization to determine which notes are displayed: it publishes notes that are rated helpful by raters who, according to their past rating behavior, hold different “points of view”. Our analysis of publicly available Community Notes data reveals that only a small fraction of notes ultimately receives a “helpful” or “not helpful” status, while the majority remain in the intermediate status “needs more ratings”. This shows that Community Notes is ineffective at adding notes and, therefore, at moderating posts and preventing the spread of misinformation.
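The scoring scheme described above can be sketched in a few lines. The following is a minimal toy illustration, not X's actual implementation: it assumes each rating is modeled as a global mean plus a rater intercept, a note intercept, and a dot product of rater and note factor vectors, with the note intercept capturing helpfulness that is shared across raters whose factor vectors (viewpoints) differ. All data and hyperparameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy ratings[u, n]: 1.0 = "helpful", 0.0 = "not helpful".
# Note 0 draws agreement across both viewpoints; notes 1 and 2 split along
# viewpoint lines.
ratings = np.array([
    [1.0, 1.0, 0.0],   # rater 0 (viewpoint A)
    [1.0, 1.0, 0.0],   # rater 1 (viewpoint A)
    [1.0, 0.0, 1.0],   # rater 2 (viewpoint B)
    [1.0, 0.0, 1.0],   # rater 3 (viewpoint B)
])
obs = ~np.isnan(ratings)                    # mask of observed ratings
n_users, n_notes, k = ratings.shape[0], ratings.shape[1], 1

mu = 0.0                                    # global mean
i_u = np.zeros(n_users)                     # rater intercepts
i_n = np.zeros(n_notes)                     # note intercepts ("helpfulness")
f_u = 0.1 * rng.standard_normal((n_users, k))   # rater factors (viewpoints)
f_n = 0.1 * rng.standard_normal((n_notes, k))   # note factors

# Intercepts are regularized more heavily than factors, so a high note
# intercept only emerges when agreement cannot be explained by viewpoint.
lam_i, lam_f, lr = 0.15, 0.03, 0.05

for _ in range(2000):                       # full-batch gradient descent
    pred = mu + i_u[:, None] + i_n[None, :] + f_u @ f_n.T
    err = np.where(obs, ratings - pred, 0.0)
    mu += lr * err.sum() / obs.sum()
    i_u += lr * (err.sum(axis=1) - lam_i * i_u)
    i_n += lr * (err.sum(axis=0) - lam_i * i_n)
    f_u += lr * (err @ f_n - lam_f * f_u)
    f_n += lr * (err.T @ f_u - lam_f * f_n)

# The cross-viewpoint note ends up with the largest intercept; the two
# partisan notes' agreement is absorbed by the factor terms instead.
print("note intercepts:", np.round(i_n, 2))
```

In the toy run, note 0 receives a clearly higher intercept than notes 1 and 2, mirroring how the real system only publishes notes whose helpfulness signal survives after viewpoint effects are factored out (the publication threshold and exact loss details of the deployed algorithm are not reproduced here).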

To investigate the underlying reasons for this phenomenon, we apply a socioinformatic analysis. This method analyzes socioinformatic systems, which are formed by the interactions of social actors, driven by their motivations, with a software system. These interactions are modeled using a so-called effect network, allowing us to assess whether the ineffectiveness of Community Notes emerges from how social actors engage with the system.


Our analysis explains the ineffectiveness of Community Notes in adding notes to (misleading) posts by the assumption that most raters rate a note according to whether it aligns with their beliefs rather than according to its helpfulness. The resulting lack of notes undermines the goal of Community Notes to prevent the spread of misinformation. Finally, we identify several research questions that arise from our analysis.
