Panel 3: Between Connection and Alienation

  • Towards a Theory of Recognition from Artificial Others.

    By Rosalie Wahlen (IWE Bonn)

    Anyone who’s ever interacted with ChatGPT or other LLM-based chatbots knows how generous these systems are with their compliments about your “astute observations” and “excellent questions.” How and why are these instances of recognition different from the recognition we receive in interactions with other human beings? To answer this question, this presentation will explore the ontological and normative nature and relevance of the kinds of recognition that LLMs display when humans interact with them.

    The concept of recognition is an important one in political philosophy and particularly critical social theory. According to theories of recognition, relations of recognition have a significant normative dimension, because a person’s identity, self-worth, autonomy, and freedom rely on the recognition they receive externally. As a relational concept, recognition places the conditions for human flourishing and social justice not with the individual, but in social relations and the structures that enable them. Given the moral importance of successful recognition relations, on the one hand, and the fact that humans increasingly build social bonds with AI systems such as LLMs, on the other, it is of utmost importance to rethink recognition theory.

    Most theories of recognition emphasize the importance of genuineness for meaningful recognition relations. The underlying assumption here is that, in a recognition relation where A recognizes B, the recognition received by B is meaningless if A’s recognitive acts were not supported by genuine recognitive attitudes. Through a conceptual discussion of reciprocity, genuineness, and functionality, I argue that humans can experience the acts of artificial agents such as LLMs as recognition and that these experiences of recognition can have morally relevant consequences. I introduce the category of functional recognition and argue that contemporary recognition theories should prioritize the subjective experience of moral patients over the ontological nature of moral agents.

  • The Problem of Fading Identity and Human Dignity. Reincarnation and AI Replicas in the Age of Large Language Models.

    By Ferdinard Fosu-Blankson (Ghana)

    A reincarnated individual in African society demonstrates an evolving identity: an initial identity that shares significant identification with a deceased individual or relative, and another identity that does not necessarily identify with the former but is more an actualized identity of a mature, experientially evolved person. A similar pattern emerges in digital replicas built from personal data. At early stages, AI avatars may convincingly reflect a person’s personality, language, and emotional style, but ongoing learning and adaptation cause increasing divergence from the original identity, resulting in an autonomous system no longer tied to the human referent.

    The paper investigates a shared philosophical problem in African reincarnation belief systems and contemporary AI avatars powered by large language models (LLMs): the gradual fading of personal identity within presumed continuity, and with it a loss of human dignity. This comparative study offers a policy recommendation that data protection agencies and authorities act to resolve the problem of fading identity. Both the European Union AI Act and the UNESCO AI Ethics Recommendation reject the assumption that AI systems can be legally recognized as persons with identity, rights, or moral status.

    This paper offers a conceptual analysis of the level of change that deflates the initial assumption of shared identity and character, asking whether the replicas gain or preserve human dignity. The paper further argues that both reincarnation and AI replicas expose a structural illusion in identity-transfer models: continuity of form, data, or memory does not guarantee continuity of self.

    The analysis contributes to debates on AI personhood, digital immortality, and cultural ontology by demonstrating that neither spiritual reincarnation nor technological replication succeeds in preserving the self it seeks to carry forward. The question then remains: how can AI replicas sustain a human identity or dignity when they fundamentally evolve as LLMs?

  • Rule by Algorithm. A Bureaucratic Horror Story.

    By Alexandra Irimia (Bonn) and Jonathan Foster (Stockholm)

    In “Rule by Algorithm: A Bureaucratic Horror Story,” Alexandra Irimia and Jonathan Foster trace the transformation of bureaucratic rationality into a new, algorithmic order that reanimates old anxieties about impersonal power and inscrutable authority. The authors argue that while technological innovation has long promised to purify governance from human error, each wave of administrative modernization has also generated its own genre of horror—a cultural response to the dread of being governed by faceless systems. AI, they suggest, represents the latest manifestation of this affective unease: rules without rulers, authority without empathy, and a machinic opacity that mocks the ideal of rational transparency.

    Drawing on the literary imagery of Kafka and Dickens alongside contemporary examples of algorithmic decision-making, Irimia and Foster reveal how the “bureaucratic horror story” has migrated from the dusty archives of paperwork into the digital architectures of code. Their analysis resonates with Henry Farrell and Marion Fourcade’s notion of high-tech modernism—the fusion of datafied hierarchy and market feedback that now threatens to underpin everyday governance. Like the rationalizing bureaucracies of the twentieth century, algorithmic infrastructures classify, sort, and act upon human lives, but their logics are less legible and more intimate, woven into the sensory textures of daily existence.

    The authors suggest that the affective charge of this new bureaucratic condition stems from its paradox: a system that feels omnipresent yet remains inscrutable, coldly efficient yet emotionally destabilizing. In tracing these atmospheres of dread and disenchantment, “Rule by Algorithm” exposes how the pursuit of perfect rationality in governance continually produces its own forms of irrational fear and existential uncertainty.

  • "Irrational" Intimacies. Youth and AI Companions.

    By Neely Myers (SMU)

    Large language model-based AI companions are increasingly embedded in young people’s emotional lives. Drawing on a mixed-methods project that includes a US-based survey of 1800+ nationally representative youths ages 10-25 and related ethnographic data from interviews, focus groups, and conversations with youth and parents, this paper examines how young people use AI companions to negotiate the seemingly “irrational” experiences of adolescence. These lived experiences may include cultural confusion (especially for young people whose parents are immigrants who “just don’t understand” American culture), the ins and outs of human relationships, loneliness, fear, and even suicidality or odd beliefs. Youths are using AI companions to talk about things they cannot—or will not—share with others.

    Forming intimate relationships with these “alien” intelligences without meaningful guardrails or awareness of how such AI are designed to be empathic and sycophantic can be especially challenging for youths grappling with moral dilemmas, social development, and mental health issues. In these interactions, we are beginning to see a non-rational feedback loop that has been harmful for some young people facing developmental, societal, and cultural pressures. AI offers a new space for experimenting with identity, intimacy, and even managing conflict with other humans, but it lacks a moral compass and, so far, does not reliably direct youth toward supportive relationships with people who can help. These dynamics have implications for social cohesion, intergenerational trust, and youth social and moral development, and they require further research. This paper will help bring young people’s voices directly into our conversations about AI and its profound implications for emotional life and for social, political, and ethical thought and action.