Fundamentals of Safe AI - Global Cohort is a transformative, free 10-week online program designed to address a critical gap in AI safety education. While existing introductory courses teach theory, they rarely equip participants with practical skills to apply their knowledge. Our program combines weekly theoretical foundations with immediate hands-on application, enabling participants to:
Gain comprehensive understanding of AI safety risks and mitigation strategies
Develop practical skills through weekly coding exercises and mini-projects
Apply learning through a supervised 2-week capstone project
Join a global community of future AI Safety Researchers and Practitioners
The program serves as Phase 1 of our three-phase AI Safety India initiative, strategically designed to create a pipeline from beginner to researcher:
Phase 1: Fundamentals program (this application)
Phase 2: In-person Research Fellowship in India (similar to MATS or ARENA)
Phase 3: Research Paper Development & Submission
By focusing on both understanding AND application, we're building the next generation of AI safety researchers, engineers, and policy experts who can effectively contribute to ensuring advanced AI systems remain safe and aligned with human values.
Develop AI safety talent pipeline: Enable 100+ participants globally to begin practical research in AI Safety.
Bridge theory-practice gap: Transform theoretical knowledge into applied skills.
Build global community: Create a diverse network of motivated AI safety practitioners across countries.
Increase accessibility: Make high-quality, practical AI safety education available at no cost to participants.
Structured Curriculum:
8 weeks core content + 2 weeks supervised project implementation
Weekly sessions combining theory with immediate practical application
Adapted from Atlas curriculum with enhanced focus on coding exercises and mini-projects
Progressive skill building from foundational concepts to specialized topics
Engagement Model:
Small-group format (5-10 participants per group) ensuring personalized attention
Expert facilitation emphasizing discussion and collaborative problem-solving
Experienced facilitators with strong hands-on coding skills, who have completed BlueDot courses or Cooperative AI courses, facilitated for AI Safety Collab, or served as AI Safety Camp fellows
Weekly assignments with concrete deliverables and feedback
Dedicated Slack workspace for continuous engagement between sessions
Outreach & Participant Selection:
Strategic partnerships with universities
Chhatrapati Shivaji Maharaj University
National Institute of Technology, Agartala
We are in talks with additional universities about collaboration
Collaboration with EA University Groups and AI Safety Communities
Targeted outreach to underrepresented regions in AI safety discourse
Selection process prioritizing motivation, potential for contribution, and diversity
Community Building:
Regional meetups where feasible (IIT Madras and NIT Agartala)
Mentorship connections with experienced researchers
Project showcase opportunities
Pathway to Phase 2 Research Fellowship for promising participants
Direct link to our detailed budget spreadsheet: https://docs.google.com/spreadsheets/d/18obeRSthIIPchcz5vVgDJXBEEWej9XIE_UXkv9yi-Ew/edit?usp=sharing
This represents the essential baseline needed to run the program:
Core Software Infrastructure ($6,000):
Zoom Pro subscription ($1,500)
Google Meet Business+ ($1,500)
Read.ai Pro for transcription ($2,000)
Slack Pro workspace for ~150 users ($1,000)
Limited Marketing ($100):
Targeted social media advertising
10% Contingency ($610)
At this minimum funding level, all facilitation, advising, and organizational work will be volunteer-based. We can operate effectively at this level, but it places significant burden on our volunteer team.
This represents our target funding level that enables fair compensation and optimal program quality:
All core infrastructure ($6,000) as detailed above
Personnel compensation ($25,000):
Facilitators ($9,000): 3 hrs/week × 10 weeks
Advisors ($1,000): 2.5 hrs/week × 10 weeks
Program Director ($5,000): 25 hrs/week × 10 weeks
Partnerships Lead ($5,000)
Marketing Lead ($5,000)
Marketing ($100)
10% Contingency ($3,110)
At this funding level, we can ensure consistent quality, provide fair compensation for our team's expertise, and maximize participant experience.
This level would enable us to expand our reach, deepen program impact, and develop additional resources:
Ambitious personnel compensation ($50,000)
Infrastructure and marketing as above
10% Contingency ($5,610)
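As a quick arithmetic check, each tier's contingency line is 10% of that tier's subtotal (infrastructure + personnel + marketing). A minimal Python sketch reproducing the figures above (the tier labels and function name are ours, not from the budget spreadsheet):

```python
# Sanity check of the three budget tiers, using figures from the proposal text.
INFRA = 6_000      # core software infrastructure
MARKETING = 100    # targeted social media advertising

def tier_total(personnel: int) -> tuple[int, int]:
    """Return (contingency, total) for a tier with the given personnel cost."""
    subtotal = INFRA + MARKETING + personnel
    contingency = round(subtotal * 0.10)
    return contingency, subtotal + contingency

# Minimum (volunteer-run), target, and ambitious tiers.
for label, personnel in [("minimum", 0), ("target", 25_000), ("ambitious", 50_000)]:
    contingency, total = tier_total(personnel)
    print(f"{label}: contingency ${contingency:,}, total ${total:,}")
```

This reproduces the listed contingencies ($610, $3,110, $5,610) and gives tier totals of $6,710, $34,210, and $61,710 respectively.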
Aditya Raj - Program Director (LinkedIn)
Founder, Effective Altruism NIT Agartala
Successfully ran multiple cohort-based educational programs:
2 cohorts of "EA Intro Course" (20 participants each)
3-year Book Reading Club (10-15 active members)
QNITA program in collaboration with IBM (~500 participants)
Extensive AI safety education background:
Facilitated the AI Alignment track of "AI Safety Collab"
Facilitated "Scaling Altruism"
Completed Courses
AIS Hungary program
Bluedot AI Safety Fundamentals
Bluedot AI Governance Intensive
Bluedot Writing Intensive
Co-operative AI Course AI Safety Asia
Precipice Cohort
EA In-depth Program
Top 30 Rank in Grayswan Jailbreak Red Teaming Hackathon
Sireesha Chavali - Partnerships & Outreach Lead (LinkedIn)
Prishita Shukla - Marketing Lead (LinkedIn)
Evander Hammer (AI Safety Coordinator | ML4Good Bootcamps | AI Safety Collab) - LinkedIn
Aditya Prasad (AI Safety Researcher | PhD Student at Indian Institute of Science, IISc) - LinkedIn
Shivam Raval (Interpretability | AI Safety | Physics & AI PhD @ Harvard) - LinkedIn
Collaborative relationship with AI Safety Collab
Confirmed university partnerships
Chhatrapati Shivaji Maharaj University
National Institute of Technology, Agartala
EA community connections for participant outreach
Our team combines academic expertise, educational experience, and a proven track record of delivering high-quality programs. We've successfully built communities around complex topics and consistently demonstrated our ability to translate challenging concepts into accessible learning experiences.
Most Likely Causes of Failure:
1. Not enough strong applicants and facilitators. → Mitigation: we have already recruited motivated participants and experienced facilitators who have completed BlueDot courses or Cooperative AI courses, facilitated for AI Safety Collab, or served as AI Safety Camp fellows.
2. Too few funded facilitators. → Funding facilitator stipends would significantly boost morale and the program's perceived seriousness, though the program can still run on a volunteer basis.
3. Team overload. → The team is currently stretched thin; we are onboarding additional people to distribute tasks.
This is our first dedicated fundraising effort for the Fundamentals of Safe AI - Global Cohort program.
This Manifund request specifically targets the essential operating costs for Phase 1 - our global foundational cohort. All previous activities have been volunteer-driven and self-funded by team members, demonstrating our commitment to this mission even before securing external support.
With Manifund's support, we can establish the foundation of our AI safety research pipeline and build toward global impact.