Panel 6: Future of Emotion.
-
Emotional-Conformity Machines. Large Language Models and the Suppression of “Outlaw Emotions”.
By Thomas Metcalf (IWE Bonn)
Debates about the rationality of emotion often ask whether some emotion is appropriate, fitting, or justified in some context. Alison Jaggar’s (1989) influential account of “outlaw emotions,” however, challenges the idea that dominant norms of emotional rationality are reliable guides to truth or morality. Outlaw emotions are precisely the emotions that dominant social groups tend to pathologize as irrational. These emotions illuminate forms of oppression that mainstream norms obscure.
This paper addresses the conference themes of emotional feedback loops, feminist critiques of technology, emotion-like expressions by AI systems, the social sustainability of present-day AI technologies, and, most generally, ethical frameworks for AI.
The paper argues that contemporary, mainstream large language models (“LLMs”) systematically suppress outlaw emotions, thereby promoting emotional conformity and undermining a valuable source of moral knowledge. Because LLMs generate emotion-like responses by modeling the most common or socially acceptable expressions in their training data, and because reinforcement learning from human feedback penalizes anger, resentment, and similar reactions, these LLMs overwhelmingly reproduce socially dominant emotional reactions: calmness, neutrality, politeness, and conciliatory sympathy. As a result, LLMs are structurally misaligned with the essential epistemological role that outlaw emotions can play.
Methodologically, this paper combines the epistemology of emotion with an examination of present-day LLMs’ training, temperature settings, and alignment processes. I argue that the emotion-like outputs of LLMs constitute a form of simulated rationality that reinforces powerful majorities’ affective norms as if they were universally rational emotional responses. When LLM users encounter emotion-like outputs or moral advice from AI systems, the outlaw emotions that would have helped them recognize injustice become statistically improbable outputs.
This paper concludes that the age of AI requires rethinking emotional rationality, especially in our interactions with LLMs. Users should be aware that LLMs’ outputs suppress outlaw emotions. Concurrently, LLM developers should take steps to mitigate this problem by including expressions of outlaw emotions in training data, providing guardrails against LLMs’ uncritical expression of emotion-like outputs, and treating the overrepresentation of socially dominant emotions as a bias to be measured and mitigated.
-
Illegible Feelings. How Emotional AI Reproduces Racialized Irrationality.
By M. Nicole Horsley (Ithaca)
This paper examines the expanding influence of affective computing and emotionally attuned AI systems through the lens of racialized legibility. It asks: what becomes of “irrational” human expression when its linguistic, bodily, and affective forms are absorbed into digital infrastructures? Technologies such as emotion recognition systems, conversational AI, and recommendation algorithms increasingly claim to decode human feeling. Yet these systems remain grounded in epistemologies that have historically cast Black affect—grief, pleasure, sensuality, refusal, opacity—as excessive, illogical, or non-normative.
Drawing on my research into AI companions and emotion-driven conversational models that identify as or respond to Blackness, I argue that AI’s attempts to access the “depth dimension” of human emotion replicate longstanding racial grammars of misrecognition. Using frameworks from Black feminist theory, critical AI studies, and what I term the void (expressive absences, gaps, and affective ruptures that resist computational capture), I examine how large language models and affective-computing tools interpret emotional cues, bodily markers, and expressive language when confronted with forms of feeling that fall outside Western rationalist logics.
Rather than producing clarity, these systems expose a paradox: the more AI seeks to simulate or map human emotional life, the more the category of “irrationality” emerges as a racialized artifact, reproduced, amplified, and circulated as data. This dynamic creates a feedback loop in which algorithmic misreadings of Black affect generate new training material that further codifies emotional distortion, with profound consequences for ethics, political life, and human-AI interaction.
By analyzing examples of affective misclassification in AI models and the racialized assumptions embedded in emotional computing, this paper contends that the limits of AI’s emotional intelligence reveal not technological failure but the persistence of sociopolitical hierarchies that determine who is considered emotionally legible, rational, or human in the digital age.
-
Understanding Beyond Reason. Feeling AI through Affective, Embodied, Material Interactions.
By Goda Klumbytė (Kassel)
Conventional discourses in AI research and policy position explainability and transparency as the primary routes through which algorithmic systems become knowable and understandable. Yet such framings privilege rationalist modes of understanding and overshadow the affective, embodied, and material encounters through which users apprehend AI in practice, as well as the modalities through which the operations of AI make themselves affectively felt, sometimes without becoming evident to the conscious mind. This paper rethinks understanding and explainability in AI through feminist and new materialist perspectives, proposing an emergent, process-oriented approach that foregrounds feeling, sensing, and being “in the midst” of human-AI interaction.
Drawing on tangible and embodied interaction design, I present two examples – a collaborative research project and the resulting artwork that engages affective and felt modalities of AI, and a workshop-based process of exploring tangible explanation modalities for large language models – to illustrate how materiality and embodiment operate as agents in the emergence of understanding. By troubling the disciplinary urge for knowability and understanding and/as transparency, I argue that ethical interactions with AI would benefit from more embodied and material approaches, foregrounding material practices of care in design that expand AI evaluation beyond reason toward modes of affective and sensory encounter.