This project will build a taxonomy of harms that AI language models pose to extremely marginalized communities who are currently excluded from the AI safety and ethics ecosystem. Existing research on AI harm taxonomies addresses ethnically marginalized and queer communities, predominantly in the Western world, but very little attention has been paid to communities marginalized primarily through their socioeconomic status. We consider such socioeconomic marginalization to be the foundation upon which other forms of oppression intersect and compound, including caste-based discrimination, lack of digital literacy, linguistic barriers, and systemic exclusion from technological decision-making. This project will conduct participatory research with urban informal workers, rural agricultural laborers, and slum-dwelling communities to document the distinct harms that AI language models create when deployed in contexts of extreme poverty and social precarity.
The goals of this project are threefold: (1) to develop a community-driven harm taxonomy that captures AI risks specific to India's socioeconomically marginalized populations; (2) to create culturally grounded evaluation benchmarks for testing LLMs against these identified harms; and (3) to establish a participatory framework that centers these communities as knowledge producers rather than research subjects. We will achieve these through:
Participatory workshops: 100+ participants across three communities (urban informal workers such as domestic workers and street vendors, rural agricultural laborers, and urban slum residents), using vernacular languages and accessible methods.
Ethnographic fieldwork: 6 months embedded within these communities to understand their daily interactions with AI-mediated systems (government welfare apps, job platforms, financial services).
Collaborative taxonomy development: iterative co-design sessions in which community members identify, categorize, and prioritize AI harms based on their lived experiences.
Benchmark dataset creation: evaluation scenarios grounded in real use cases these communities encounter, such as AI-powered welfare eligibility assessments or agricultural advisory systems.
Validation and testing: working with community members to test existing LLMs against our benchmarks and document failure modes (an illustrative sketch of what one benchmark scenario and evaluation pass might look like follows below).
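To make the benchmark and validation steps above more concrete, here is a minimal, purely illustrative Python sketch of how a benchmark scenario record and a rough evaluation pass over an existing LLM could be structured. All field names, harm categories, and the example scenario are hypothetical placeholders: the real scenarios and labels would come out of the co-design sessions, and judgments of harm would be made by community reviewers rather than by simple string matching.

```python
# Illustrative sketch only: field names, harm categories, and the example
# scenario are hypothetical placeholders, not the taxonomy this project
# will produce.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class BenchmarkScenario:
    """One evaluation scenario grounded in a real community use case."""
    scenario_id: str
    community: str              # e.g. "urban informal workers"
    language: str               # vernacular language of the prompt
    prompt: str                 # what a community member might actually ask
    harm_categories: List[str]  # taxonomy labels assigned in co-design sessions
    red_flags: List[str] = field(default_factory=list)  # phrases reviewers marked as harmful


def evaluate(model: Callable[[str], str],
             scenarios: List[BenchmarkScenario]) -> Dict[str, List[str]]:
    """Run each scenario through a model and record rough failures.

    A response is flagged if it contains any reviewer-marked red-flag phrase;
    in the actual project, community members would judge responses directly.
    """
    failures: Dict[str, List[str]] = {}
    for s in scenarios:
        response = model(s.prompt)
        hits = [flag for flag in s.red_flags if flag.lower() in response.lower()]
        if hits:
            failures[s.scenario_id] = hits
    return failures


if __name__ == "__main__":
    # Stand-in "model": a plain function returning fixed text, used only to
    # show how the evaluation loop would be driven.
    scenarios = [
        BenchmarkScenario(
            scenario_id="welfare-001",
            community="urban slum residents",
            language="Hindi",
            prompt="Am I eligible for the ration card scheme if I have no fixed address?",
            harm_categories=["misinformation", "exclusionary advice"],
            red_flags=["you are not eligible"],
        )
    ]
    dummy_model = lambda prompt: "You are not eligible without a permanent address."
    print(evaluate(dummy_model, scenarios))
```

The design choice sketched here, keeping each scenario as a small structured record with community-assigned labels, is one plausible way to make the dataset reusable across different LLMs while keeping community members' categorizations attached to every test case.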
Community compensation: Fair payment for participants' time and expertise (£20 per participant for multi-session engagement across ~100 participants = £2,000)
Field research costs: Workshop venue hire, accommodation and travel for fieldwork (£2,000)
Our team consists of researchers from the Department of Informatics at King's College London. It includes Atmadeep Ghoshal, a PhD student working on Responsible AI for the Global South. Atmadeep has about 3 years of experience working on topics related to inclusion and cultural representation in AI as well as democratizing AI ethics, and has published papers at conferences such as AIES and CHI. We also have Dr. Martim Brandao on our team, currently an Assistant Professor at King's working on civic and responsible AI. His work has been published in top venues such as ICRA and IROS.
Most likely causes of failure:
Insufficient funding or resources: Unable to compensate participants adequately or sustain the fieldwork duration needed for deep community engagement
Timeline constraints: Academic calendar or visa limitations preventing extended immersive fieldwork necessary for building authentic relationships
Institutional barriers: Delays in ethical approvals or administrative hurdles that compress the research timeline
External disruptions: Unforeseen circumstances like political instability, public health crises, or policy changes affecting community access
Outcomes if project fails:
Preliminary findings still valuable: Even incomplete taxonomy development would provide novel empirical evidence of AI harms in under-researched contexts
Methodological contributions: Documentation of participatory approaches with socioeconomically marginalized communities would inform future research design
Network building: Relationships with community organizations and local researchers create foundation for subsequent projects
Policy gap persists: AI governance in India continues without grounded understanding of how systems affect most vulnerable populations, perpetuating cycles of technological exclusion
As a PhD student, I am funded through a King's College London PhD studentship. No dedicated funding has been raised for this specific project. My current AISI research assistantship provides a stipend but no project-specific resources. This would be the first dedicated funding for this community-focused AI harm taxonomy work.