Panel 4: Artificial Expression in Aesthetic and Epistemic Practice
-
On Expressive Form.
By Marvin Tritschler (Stuttgart)
In my talk, I will analyze the concept of expressive form in order to answer the question whether LLMs can be said to carry out acts of expression.
(1) A common mistake in the philosophy of language has always been to take the form of the assertive proposition to be the general form of all language use. Today, it is usually acknowledged that expression is a wider concept, not only because it involves non-assertive speech acts (such as asking a question), but also because it extends to acts that are not directly connected to intentional action or even language use (such as laughing from joy). Consequently, in current debates around emotive AI and AI agents, it is often suggested that even if LLMs do not grasp that the strings of signs they produce stand in a relation of truth (or falsity) to facts, it could nevertheless be true that they express themselves. Couldn't something that lacks representations and intentions as we have them still express itself, simply by interacting with us in a way that causes an emotional reaction in us?
(2) My negative answer to this question will be based on the argument that LLMs can neither relate to truth nor to meaning. They do not use propositions as they do not consciously carry out speech acts (consider the difference between “A expressed x by using p” and “p expresses x”, cf. Tritschler 2025). But they also do not express themselves as they cannot act out emotions unconsciously (consider “A's laughter expresses her joy”). Generally speaking, the form of expression is located at the intersection of these two paradigm cases, as it is a transformation of something naturally given (an actual datum for thinking) into a determination of the mind (a potential part of a thought). Note that this doesn't require that the one who achieves such a transformation must also consciously represent the determination in question and posit it as such, as can be seen in non-representational expressive acts such as joyful laughter. That is, one can exhibit expressive form unconsciously. Still, this requires that one is in principle able to use expression for thought.
(3) I will end my talk by proposing that this suggests that “irrationality” is an aspect of rationality. For example, a non-representational feeling is, for human beings, still an expression of their rational nature. And rationality, as such, must be expressed through emotive manifestations of the human mind in some body. Thus, it is indeed wrong to think that quite different kinds of expressive acts (such as laughing, walking, speaking etc.) all fall under one common form. Rather, in expression, these different types of acts together constitute the human realization of what we call “rational”. And as machines cannot achieve such an intertwining of mind and matter, they are also incapable of expressing themselves and should not be treated as emotionally relevant interaction partners.
-
Should LLMs Testify? Situated Emotions, Performativity and AI-as-Witnesses.
By Nora Lindemann (Osnabrück)
In this theoretical contribution, I examine how large language models (LLMs) reshape practices and understandings of witnessing. Traditionally, witnessing is bound to the lived and embodied experiences of humans who give testimony of what they experienced in a certain situation. This is now changing as AI technologies increasingly become part of witnessing practices.
This shift pertains, on the one hand, to contemporary witnesses of past events, such as LLMs that allow users to “talk” with historical figures like Sophie Scholl, thereby (seemingly) providing interactive and immersive testimonies of the past (Fobizz, n.d.). On the other hand, it also affects present-day events and legal contexts. Recently, in the US, a victim impact statement of a murder victim was delivered through a video using synthetic speech and a deepfake image of the deceased (Duffy 2025).
Drawing on Judith Butler’s theory of performativity and feminist notions of witnessing, I critically examine how performative (speech) acts of witnessing are transformed through the use of AI technologies, particularly LLMs. As Schmidt and Voges (2011) argue, witnessing is always political: trusting someone to be a witness also means ascribing credibility. How, then, do the performative, political practices of witnessing change when disembodied LLMs are increasingly involved?
In line with the conference theme, I will focus on the affective and irrational dimensions of witnessing and its impact. In the case of the AI-generated witness statement shown in court, the presiding judge described being emotionally moved, despite the video being clearly marked as synthetic (Duffy 2025). At the same time, witnessing can reveal power imbalances by making the emotions of those involved visible (Gillespie, 2016). Therefore, we must attend to the performative and affective dimensions of language: testifying is not merely describing, but also enacting presence, credibility and emotion.
-
The Algorithmic Muse and the Expert Eye. Empirical Evidence of a Perception Gap in the Reception of Synthetic Poetry.
By Eduardo de la Cruz Fernández (Universidad Politécnica de Madrid)
This paper addresses the conference’s core inquiry into the boundaries of machine simulation and human irrationality by asking: Can AI-generated poetry transcend rational mimicry to evoke genuine aesthetic depth? We present findings from an empirical study based on a novel methodology for synthetic creative writing developed at the Polytechnic University of Madrid. Unlike standard prompting approaches, our system utilizes a Retrieval-Augmented Generation (RAG) architecture integrated with formalized academic literary theory to generate poetry in Spanish.
The intervention involved a comparative evaluation of these synthetic texts by two distinct populations: lay readers and literary experts. Participants engaged in blind readings to assess the poems across four key dimensions: global experience, emotional evocation, harmony and musicality, and originality and conceptual depth. The results reveal a striking inversion of preference. Lay readers consistently rated the AI-generated poems higher than human works in global experience, favoring their harmonic structure. In contrast, literary experts consistently preferred human poetry, identifying a qualitative deficit in the synthetic texts across all evaluated dimensions.
This paper argues that the “irrational” element of human expression may not be a universal experience, but a depth layer legible primarily through expertise. This divergence suggests that while AI successfully simulates rational poetic structures—appealing to a general desire for accessibility—it currently fails to reproduce the ruptures and original nuance experts recognize as signs of genuinely deep expression, leaving open whether this deficit represents an inherent ontological limit or a temporary technological constraint. We conclude by discussing the ethical implications of this gap: are we moving toward a culture where algorithmic kitsch becomes the default emotional substrate for the majority, while genuine human expression—its recognition, production, and circulation—risks becoming a niche for the elite, or will future advances in language models narrow this divide?