AI-ALLY

Who decides who will be excluded, discriminated against, or simply overlooked by the artificial intelligence systems being developed, trained, and standardized today? The stated societal goal is to make AI systems safe and trustworthy. But safe and trustworthy for whom? A “safe space” for some may not be safe for others in a plural and diverse society. If democratic principles are reduced to majority logic, minorities risk being included only as “special projects,” handled through institutionalized or tokenistic frameworks.

When AI systems are designed or trained to operate within binary frameworks, for instance by classifying voices, bodies, or identities as strictly male or female, they reproduce and reinforce restrictive social norms. What may seem like a purely technical decision becomes a political one, shaping how people and machines alike are categorized. Beyond gender, similar patterns of exclusion persist: who is forgotten, criminalized, or reduced to generalizations through design bias, lack of participation, or political structures that limit inclusion?

In Europe, the AI Act delegates much of the concrete shaping of “trustworthy AI” to private-sector-driven standardization. But what does this mean for those who are not, or cannot be, part of these processes?

AI-Ally draws from the queer concept of an ally: a position of reflective support that acknowledges its own privilege and seeks to build bridges — not to speak for marginalized voices, but with them. Extending beyond queer contexts, AI-Ally aims to connect dominant AI discourses with the plural voices that remain unheard.

A brainstorming workshop is planned for early 2026 in Bonn (more information to follow).
If you are interested, please contact Dr. Jurgita Imbrasaite: jurgita.imbrasaite@uni-bonn.de