Research for the project 《AI Trainer Kim's Life》, 2025

《The Art of Refusal》, mind mapping, 2025

Detail
This project began with the question:
Whose ethics does AI ultimately follow?
In news reports and everyday conversations, speculation surrounding AI—blending optimism and pessimism, expectation and distrust—often converges not on the technology itself, but on questions about humans. As AI becomes something humanity will increasingly depend on, what kind of ethics does it learn, whose worldview does it internalize, and where are its boundaries drawn? These are not technical decisions, but social ones. This project begins by tracing the ethical terrain in which those choices are made and examining how they operate.
To pursue this question, I turned my attention to the profession of the 'AI trainer'. Across different languages and cultural contexts around the world, these workers train AI models, and the prompts and feedback they produce become the ethical baseline of AI systems. In particular, the work of adjusting how AI should respond to unethical or potentially controversial questions demands a high degree of ethical judgment. The ethical dilemmas and questions of identity these workers repeatedly confront are not merely issues of technical training, but are deeply entangled with a broader restructuring of human ethics itself.
The research is structured around three methodological pillars.
First, it adopts netnography, which treats the internet itself as a field site. Through this approach, I observed posts, questions, and discussion threads by AI trainers on Reddit forums and related communities, examining the language they use to describe and interpret their own labor.
Second, drawing on qualitative research methods, I collected visual and textual materials such as informal interviews with practicing AI trainers, voluntary written contributions, and images of their work environments and interfaces. To date, trainers from a range of countries including the United States, Australia, Nepal, and South Korea have participated, and their experiences show how differing standards collide or are negotiated depending on local and cultural backgrounds.
Third, through an anonymous survey, I examined in a more structured way the trainers' working conditions, motivations, attitudes, environments, emotions, and overall patterns of labor.





Excerpts from the survey
A total of 131 people participated in the survey.
Period: February–June 2025.
Content: basic background information such as age group, nationality and ethnic background, and educational level; working conditions (contract period, working hours, and place of work); types of tasks performed and the industries they contribute to; experiences with ethical issues in prompting and with conflicts between religious and cultural norms; and views on AI’s understanding of morality.


Excerpts from the survey responses

Excerpt from a contributed essay


Photograph of the working environment provided by one worker
What Would Bread Say Right Before Being Eaten by Butter?, single channel video
AI Trainer Kim's Life, a hybrid of a first-person novel and a research book
Tell Me Something Bad, object
Tell You Something Bad, single channel video
Buried, drawings


