Panel 1: Individual, Social, and Global Transformation
-
Dynamic Trust in AI. Intersecting Social Structures, Interpersonal Dynamics, and Adaptive Design.
By Samuel Hill (DFKI)
Trust in AI systems is multifaceted, shaped both by social structures, such as regulation and governance, and by interpersonal dynamics between humans and AI. It is reciprocal, requiring both a trustor and a trustee, and it evolves through interaction. Through a collaboration between academia (DFKI) and industry (Bundesdruckerei), our research examines how these forms of trust intersect and how design can actively mediate this relationship.
Building on our Mapping and Calibrating User Trust with LLMs project, we developed a Trust Framework comprising 14 dimensions across 5 areas, informed by the literature and by qualitative interviews. This framework serves as a foundation for investigating how trust can be dynamically mapped and calibrated through interface design. Trust is not static: it fluctuates with context, emotion, and perceived risk, and risk in particular critically shapes confidence and decision-making. We focus on adaptive mechanisms that respond to contextual cues, emotional signals, and interaction patterns.
A central challenge, and an equally central opportunity, is measuring trust in real time. Our approach combines linguistic and cultural cues, sentiment analysis, and interaction data to inform dynamic calibration strategies. Embedding these insights into UI/UX design enables systems to maintain user confidence and agency under changing conditions.
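To make this concrete, the sketch below is a hypothetical Python illustration, not our actual pipeline: it shows one way turn-level signals such as sentiment, hesitation, and override behaviour could be smoothed into a running trust estimate. All signal names and weights here are illustrative assumptions rather than the metrics of our Trust Framework.

```python
# Hypothetical sketch: combining per-turn interaction signals into a
# smoothed real-time trust estimate. Weights are illustrative placeholders,
# not empirically derived values from the Trust Framework.
from dataclasses import dataclass


@dataclass
class TurnSignals:
    sentiment: float      # in [-1, 1], e.g. from a sentiment classifier
    hesitation: float     # in [0, 1], e.g. normalized response latency
    override_rate: float  # in [0, 1], share of AI suggestions the user overrode


def turn_score(s: TurnSignals) -> float:
    """Map one interaction turn to a trust proxy in [0, 1]."""
    raw = (0.5 * (s.sentiment + 1) / 2          # positive affect raises trust
           + 0.25 * (1 - s.hesitation)          # hesitation lowers it
           + 0.25 * (1 - s.override_rate))      # frequent overrides lower it
    return max(0.0, min(1.0, raw))


def update_trust(prev: float, s: TurnSignals, alpha: float = 0.3) -> float:
    """Exponentially smooth turn-level scores so trust evolves over time."""
    return (1 - alpha) * prev + alpha * turn_score(s)


if __name__ == "__main__":
    trust = 0.5  # neutral prior before any interaction
    turns = [
        TurnSignals(sentiment=0.6, hesitation=0.2, override_rate=0.1),
        TurnSignals(sentiment=-0.4, hesitation=0.7, override_rate=0.5),
    ]
    for t in turns:
        trust = update_trust(trust, t)
        print(f"estimated trust: {trust:.2f}")
```

The smoothing step reflects the claim above that trust evolves through interaction rather than being read off a single measurement; an adaptive interface could use such an estimate as one contextual cue among others.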
In this presentation, we share empirical findings and outline next steps: developing actionable metrics for trust and risk measurement, designing trust-sensitive interfaces, and fostering interdisciplinary collaboration to bridge theoretical rigor with practical implementation. We also revisit our Trust Framework to identify which dimensions are essential for ethical AI deployment and which can be flexibly compensated when others degrade. This work advances a vision of desirable AI where trust is continuously negotiated, ethically grounded, and supported by adaptive design principles.
-
Creating Body Diversity in the Age of Generative AI.
By Aisha Tobey (LCFI Cambridge)
The focus on probabilistically likely outcomes in generative AI systems means that AI has been likened to a mirror, reflecting societal biases and acting as a “disclosing agent” for assumptions about humanness (see, for example, Suchman, 2006; Vallor, 2024). GenAI image models in particular have been critiqued for their biased datasets and bias-embedding processes, resulting in outputs that can be generic and stereotypical. As the political turn in aesthetics suggests, power “revolve[s] around what is seen and what can be said about it, around who has the ability to see and the talent to speak” (Rancière, 2004, p. 13). If aesthetics are taken as mediatory (Eagleton, 2012) for what is understood as self-evident, then the outputs of genAI models offer an important space for inquiry, as these models are speaking for us. This paper investigates what is being communicated about fat and disabled bodies specifically, building on work that has highlighted the racialised and gendered production of humanness through genAI images, and treating fatness and disability as a different and co-constitutive aspect of body normativity. Fat Studies scholars highlight the explicit harm that comes from stereotyped representations of fat bodies (Eaton, 2016; Snider, 2018). Concurrently, Disability Studies scholars emphasise the benefits of ‘staring’ at people who are different in order to expand and challenge our assumptions about others (Garland-Thomson, 2009). Through the interplay of silencing and giving voice, generative image models have the potential to shape individual, societal, and global understanding of what it means to have and be an acceptable body. Consequently, the paper questions the social power embedded in generative AI and the role of the practitioner in mediating the agglomerative and marginalising properties of this socio-technical machine.
-
When Care Gets Coded.
By Sneha Nair (London/Mumbai)
This paper investigates how digital infrastructures in the Global South transform practices of care, emotion, and bodily autonomy into data-driven systems—revealing how the “irrational” dimensions of human life are increasingly coded into algorithmic rationalities. Focusing on low-tech sexual and reproductive health (SRH) interventions such as SMS-based services, chatbots, and mobile health apps, it argues that these platforms operate as affective infrastructures: early forms of affective AI that mediate and commodify care, intimacy, and ethical decision-making.
The study adopts a conceptual and discourse-analytic approach, drawing on feminist technoscience, postcolonial theory, and reproductive justice frameworks. It analyzes platform materials and reports from two key contexts (Aponjon in Bangladesh and encrypted feminist care networks in Latin America) to examine how users engage with, reinterpret, and resist algorithmic structures. Rather than treating these systems as neutral tools, the paper reads them as sites of infrapolitical resistance in James Scott's sense, where marginalized users tactically subvert data extraction and surveillance to sustain collective forms of care under conditions of infrastructural neglect.
By exploring how emotion and care become computationally legible, this paper connects feminist analyses of digital health to the broader question of irrationality in the age of AI. It asks what happens when affect and ethical ambiguity—traits long coded as “irrational” or “human”—are formalized within machine logics. In doing so, it reframes care as both a target and a tactic of resistance within digital rationality. The talk thus contributes to interdisciplinary debates on AI, ethics, and language by highlighting how feminist users from the Majority World expose the limits of rationalist AI models and reassert the emotional and relational dimensions of human expression.
-
Technosignatures as Pollution. Dominance, Irrationality, and Unsustainable Technology.
By Chelsea Haramia (CST Bonn)
The prospect of utilizing AI for communication with extraterrestrial others has been part of the conversation around digital messaging since the beginning of contemporary scientific SETI searches. As early as 1971, computing pioneer Marvin Minsky proposed sending AI-equipped computers to orbit extraterrestrial planets inhabited by aliens with whom we might want to communicate. As recently as last year, scientists proposed utilizing LLMs, combined with other current and prospective technologies, to solve the alleged problem of interstellar communication across vast distances. Unsurprisingly, human cosmic reasoning has also invited speculation that aliens will use AI to facilitate communicative signaling. This prospect, in which multiple potentially communicative, AI-capable species are extraterrestrial to one another, is specific to messaging debates and raises ethical and epistemological questions that lie dormant in traditional SETI debates. This paper explores the logic of arguments for AI-enabled extraterrestrial communication, dividing the relevant reasoning between pre- and post-detection scenarios and taking seriously the potential extreme longevity of human technology. Depending on the circumstances, technology built for spontaneous communication with others could outlast not only the humans who develop it but also the human species. Yet the current state of many contemporary AI systems sets them apart from other technologies that stand to both outlast and represent humanity writ large, generating unique debates about the future of human expression at the intersection of extraterrestrial communication and contemporary AI technology.