Thesis title: Infrastrutture di disuguaglianza: un’indagine socio-tecnica sui bias di genere ed etnici nei sistemi di intelligenza artificiale (Infrastructures of Inequality: A Socio-Technical Investigation of Gender and Ethnic Biases in Artificial Intelligence Systems)
This doctoral project investigates gender- and race-related biases in algorithmic systems from a socio-technical perspective that treats these technologies as social agents and as infrastructures of inequality, capable of reproducing and amplifying historically rooted disparities. Situated within a context of growing scholarly attention to the discriminatory dynamics triggered by datafication and automation, the research explores the contextual and situated dimensions of algorithmic bias, rejecting reductive interpretations that frame such biases merely as technical anomalies or procedural deficiencies (Airoldi, 2022). Rather, the dissertation proceeds from the assumption that these distortions are intimately intertwined with the power relations that structure society and with persistent logics of exclusion and marginalization that also permeate technological practice (Costanza-Chock, 2020; D’Ignazio & Klein, 2020; Noble, 2018).
The dissertation is organized into three chapters. The first chapter contextualizes the analysis of algorithmic bias within platform society, with reference to foundational studies that have informed sociological research in this field and with specific attention to intersectional feminist and decolonial perspectives. It details, in particular, how these systems, through processes of classification and optimization, influence the dynamics of recognition and access to resources and opportunities. In this scenario, algorithmic technologies do not merely reflect the social; they actively constitute it, acting as actants that shape symbolic, cultural, and economic relations (Crawford, 2021; van Dijck, Poell & de Waal, 2019).
The second chapter presents a scoping review conducted according to the PRISMA protocol (Page et al., 2021), which made it possible to map the academic literature on gender- and race-based biases embedded in ML/AI technologies and to highlight conceptual fragmentation and the predominance of specialist perspectives that tend to exclude the viewpoints of those directly involved in the design of such systems. The absence of shared definitions for concepts such as bias, equity (fairness), and inclusivity hampers the development of theoretical and operational tools capable of addressing the complexity of discriminatory phenomena (Hall & Ellis, 2023; Kekez et al., 2025). This gap is all the more significant given that these concepts are often articulated in the abstract, without adequate anchoring in the experiences and positions of practitioners and historically marginalized groups.
Building on these results, the third chapter presents a thematic analysis (Braun & Clarke, 2006) of data gathered through 35 interviews with developers of AI-based technologies. The testimonies again reveal definitional vagueness and conceptual overlap among the notions of bias, equity, and inclusivity, as well as a generally superficial awareness of algorithmic discrimination. In some cases, narratives of technological neutrality persist (Airoldi & Gambetta, 2018; Natale & Ballatore, 2017).
In light of these analyses, the conclusion integrates theoretical and empirical evidence to outline a definitional synthesis of the three key concepts and to propose a set of macro-principles and guidelines for mitigating algorithmic inequalities, articulated across five interdependent levels: data, teams, models, users, and context. The objective is to move beyond the current fragmentation and to orient design toward intersectionality, proximity, decolonization, and liberation.
The entire research trajectory is conceived from a longitudinal perspective, attentive to iterative cycles of design and evaluation and capable of observing how normative definitions and operational practices are reconfigured as organizational structures and the technological ecosystem evolve.