Tidying up conceptual issues in AI ethics debates.
Current debates about 'AI ethics' are rife with conceptual ambiguities and inconsistencies. Drawing on my training in analytic philosophy, I've been working to sharpen our understanding of the basic concepts we employ when discussing the ethics and politics of AI.
In this paper (ACM FAccT '23), I argue that welfarism can help us theorize 'AI transparency' as a broader moral and political ideal about how we ought to relate to powerful technologies that make decisions about us, rather than simply a demand to look at their innards.
In this paper (Minds & Machines, 2023), Zach Tan and I argue that popular calls for 'explainable' and 'trusted' AI-based decision-support tools are in tension with one another, because they require decision-makers to simultaneously adopt incompatible attitudes towards their AI systems.
In this paper (Philosophy & Technology, 2022), David De Cremer and I argue that the algorithms that search engines use to curate and order content can be seen as providers of a kind of testimony, and ought to be treated as such.
In a conference paper for 4S 2021, Andreas Deppeler and I argued that the 'future of work' discourse, for all its revolutionary promises, re-legitimizes certain key norms from the present-day organization of work – about which kinds of work are/aren’t valuable, which kinds of organizational hierarchies are justified, and whose work will be ‘disrupted’ vs. ‘liberated’ by AI.
Understanding perceptual and behavioral aspects of AI fairness and trustworthiness.
Calls for AI 'fairness' and 'trustworthiness' often operationalize these concepts in reductive, technosolutionistic ways. Drawing on methods in social psychology and HCI, I've been working to understand how people form beliefs about fairness and trustworthiness through their interactions with AI.
In this paper (International Journal of Human-Computer Interaction, 2023), my colleagues and I review and categorize existing empirical literature on perceptions of AI fairness, and sketch future directions for research on this topic.
I'm currently running a series of studies, with Shane Schweitzer and David De Cremer, to explore how third-party certifications of the fairness of AI systems affect their perceived trustworthiness.
In a paper currently under revision (lead-authored by Jack McGuire), we explore how employees perceive the fairness of automated performance evaluations, and how the actions of intermediating managers can influence these perceptions.
In commentaries for AI & Ethics (2021), Nature Reviews Psychology (2022), and Frontiers in Artificial Intelligence (2023), my colleagues and I discuss the limitations of current technosolutionistic approaches to issues of AI ethics, and call for increased attention to affective and perceptual aspects of these issues.
Exploring how ethical issues emerge upstream in AI development work.
My research motivations trace back, in large part, to my own experiences as an engineering student grappling with the ethical and social aspects of my work. In turn, I have become particularly interested in studying how ethical issues are raised and addressed at the point at which technologies are produced.
I'm currently running a series of surveys, with Jack McGuire, David De Cremer, and Kwong Chan, to explore how AI developers and other types of data workers think about biases and cognitive blindspots in their work, and how these, in their view, relate to potential biases and flaws in the AI systems they develop.
In this book chapter (Konrad Adenauer Stiftung, 2021), Zach Tan and I explore how Singaporean organizations were struggling to implement organizational structures and policies to address (what they perceived to be) the 'moving target of AI ethics', and how they were using (and resisting) government guidelines for implementing ethical principles into practice.