Thesis title: Dalla sovranità digitale europea all’autosovranità individuale: l’algoritmo attraverso il prisma del diritto all’identità personale (From European digital sovereignty to individual self-sovereignty: the algorithm through the prism of the right to personal identity)
The research begins by examining the foundational principles underpinning the European project of digital sovereignty, highlighting how it is conceived as an instrumental condition for safeguarding fundamental rights rooted in the shared constitutional heritage of European states. Within today’s AI-driven ecosystem, the pervasive influence of digital technologies directly interferes with the exercise of rights tied to the ontological dimensions of personhood. These dynamics pose serious threats to the resilience and legitimacy of democratic institutions. It is precisely in response to these risks that the principle of individual self-sovereignty has come to represent the conceptual and political cornerstone of the European approach to digital sovereignty.
In this context, digital sovereignty emerges as a political tool to ensure the effectiveness of what is increasingly referred to as digital constitutionalism. Unlike traditional, pre-digital forms of constitutionalism—which primarily aimed to constrain public power in order to protect individual rights—digital constitutionalism assumes an unprecedented form: it responds to the “functional sovereignty” exercised by geopolitically significant private actors, most notably transnational digital platforms. These actors increasingly perform quasi-public functions, including the balancing and enforcement of fundamental rights. If digital constitutionalism is to be understood as the capacity to safeguard fundamental rights against private power, the key question becomes: how can this objective be effectively pursued in a complex, transnational, and multi-agent digital environment?
To address this question, the first part of the thesis explores the emergence of platform infrastructural sovereignty as a novel organizational form rooted in the logic of computation, and analyzes its progressive institutionalization within the EU’s evolving regulatory framework. Chapter One, drawing on critical Platform Studies—particularly the Amsterdam School—conceptualizes the “platform model” and examines the dominant paradigms and evolving trends in platform governance. This sets the stage for understanding the “Regulatory Turn” in Europe, where the EU positions itself as a “third way” between U.S.-style industry self-regulation and state-centered, sovereigntist models of internet governance aimed at the reterritorialization of networks. Sovereignty claims, in this context, differ according to the form of self-determination they aim to protect. Since individual self-determination lies at the heart of the European approach—designed to guard against illegitimate interference from both public and private actors—Chapter Two investigates how this objective is operationalized through regulatory means, with particular attention to the EU’s digital strategy and the Digital Services Act (DSA), often referred to as “Europe’s new digital constitution.” The chapter examines: (1) the issue of platform responsibility in what is defined as the new regime of “neo-intermediation,” and (2) the regulation and implementation mechanisms related to algorithmic power.
The second phase of the research, adopting a concentric methodological approach and integrating legal and sociological perspectives, turns to the micro-level analysis of a key multi-agent subsystem of the digital environment: the algorithm. In Chapter Three, a heuristic model is proposed to conceptualize the “Algorithmic Social System”—a socio-technical construct articulated along three dimensions: F.1. Data, F.2. Code, and F.3. People. This tripartite model focuses on Informational Recommender Systems (RS) as a paradigmatic case. It serves a dual purpose: (1) as a hermeneutic paradigm, offering a structured framework for systematizing and analyzing existing research on RS; and (2) as a heuristic-operational paradigm, functioning as an empirical tool for investigation. In this capacity, 20 in-depth interviews were conducted, focusing on three key factors: F.1—perceptions of personal data transfer and its perceived economic value; F.2—individual relationships with RS, particularly in terms of personalization; and F.3—awareness of the interactions between data and algorithms within machine learning systems.
This tripartite framework provides a lens for understanding how datafication processes and human-machine interactions influence identity formation and individual autonomy. By employing this paradigm, we can explore how algorithms—particularly RS—interfere with a constellation of fundamental rights, reframed here through the prism of personal identity. This analysis brings the central normative question into focus: whether, and to what extent, we can conceptualize a shift from the GDPR’s privacy paradigm to a broader right to personal identity—one capable of safeguarding self-sovereignty and human agency in the age of pervasive AI.
Chapter Four, after a doctrinal review that critically evaluates the limitations of the prevailing legalistic conception of the right to privacy, advances a dynamic understanding of data. This perspective moves beyond the static, objectified view of data as mere abstract entities toward a model that emphasizes the impact of algorithmic data processing—regardless of data type—on the individual’s autonomy and identity construction. Incorporating moral philosophy, Chapter Five seeks to demonstrate how algorithmic interference affects the preconditions of freedom of choice and action—thus reshaping our understanding of human subjectivity. As a result, the debate must expand beyond the usual “privacy/data protection” framework. The thesis proposes moving beyond both static legal interpretations and informational self-conceptions of privacy, toward the definition of minimum conditions for a morally free process of identity construction.
Using the notion of personal identity as both an interpretive and normative criterion requires rethinking the ontology of data—particularly the legal distinction between “personal” and “non-personal” data. This entails assigning proper significance to so-called “external data” (e.g., metadata, tracking data, behavioral surplus), which—though not immediately impactful within the legal domain—can exert long-term systemic effects on individual and collective cognitive, perceptual, and decision-making capacities.
In the conclusion of the chapter, the ethical problem of algorithmic recommendation is reassessed through the three dimensions of the personal identity prism: (1) epistemic, (2) socio-relational, and (3) moral. Through this lens, the chapter identifies key regulatory shortcomings and opens new avenues for inquiry—particularly given the limited number of studies that treat these issues as an autonomous field of investigation.
Chapter Six links the research findings to the emerging implications of General Purpose Artificial Intelligence Systems (GPAIS), particularly Large Language Models (LLMs). The conclusion outlines the ontological and normative foundations for a shift from the principle of informed consent, the cornerstone of privacy regulation, to a broader “trust-based model”: a negotiated and iterative decision-making process designed to strengthen the autonomy of human recipients. This model is proposed as a normative foundation for AI governance capable of protecting individual cognitive self-sovereignty within an emerging bio-hypermedia ecosystem.