How People Ethically Evaluate Facial Analysis AI: A cross-cultural study in Japan, Argentina, Kenya, and the United States

Abstract: In computer vision AI ethics, a key challenge is to determine how digital systems should classify human faces in images. Across different fields, there has been considerable scholarly debate about normative guidelines that inform policy-making for facial classification. In previous work, we applied an experimental philosophy approach to investigate how non-experts and experts in AI deliberate about the validity of AI-based facial classifications. Our analysis of 30,000 written justifications using the transformer-based language model RoBERTa quantified the normative complexity behind classifying human faces. Experts and non-experts found some AI facial classifications morally permissible and others objectionable. We also identified justificatory pitfalls that legitimized invalid facial AI classifications: some justifications reflected overconfidence in AI capabilities, while others appealed to narratives of bias-free technological decision-making or cited the pragmatic benefits of facial analysis in specific decision-making contexts such as advertising or hiring. Contrary to popular justifications for facial classification technologies, these results suggest that there is no such thing as a “common sense” facial classification that accords simply with a general, homogeneous “human intuition.” However, cross-cultural perspectives have so far been entirely missing from this debate. In ongoing work, we address this gap by working with collaborators in Japan, Argentina, and Kenya to extend the project to an analysis of non-experts’ justifications of facial AI classification in these countries. We aim to understand whether there are cultural commonalities and differences in the ethical evaluation of facial AI classifications. At the Desirable AI conference, we would present the quantitative and qualitative results of our cross-cultural study in Japan, Argentina, Kenya, and the US.
This research supports critical policy-making by documenting cross-cultural perceptions and judgments of computer vision AI classification projects with the goal of developing ethical digital systems that work in the public’s interest.

Author bios: Severin Engelmann: With a background in philosophy of technology and computer science, Severin is an ethicist focusing on the ethics of digital platforms and systems. He currently studies how non-experts in AI ethically evaluate AI inference-making across computer vision decision-making scenarios. In this research project, he also investigates whether and to what extent participatory approaches to AI ethics help advance the ethical governance of algorithmic systems.

Chiara Ullstein: Chiara is a Ph.D. student at the Chair of Cyber Trust. With a background in Politics and Technology, her research explores public participation in the development and regulation of AI applications. She applies both qualitative and quantitative research methods.

[Please note that the authors of this presentation requested not to be recorded]

#InterculturalApproaches #FacialRecognition #Designers/Developers #Japan #Argentina #Kenya #US
