Manifund


Fundamentals of Safe AI - Global Cohort

Science & technology · Technical AI safety · AI governance · EA community · Global catastrophic risks

AI Safety India

Proposal · Grant
Closes June 13th, 2025
$0 raised
$6,710 minimum funding
$34,210 funding goal


Project summary

Fundamentals of Safe AI - Global Cohort is a transformative, free 10-week online program designed to address a critical gap in AI safety education. While existing introductory courses teach theory, they rarely equip participants with practical skills to apply their knowledge. Our program combines weekly theoretical foundations with immediate hands-on application, enabling participants to:

  • Gain comprehensive understanding of AI safety risks and mitigation strategies

  • Develop practical skills through weekly coding exercises and mini-projects

  • Apply learning through a supervised 2-week capstone project

  • Join a global community of future AI safety researchers and practitioners

The program serves as Phase 1 of our three-phase AI Safety India initiative, strategically designed to create a pipeline from beginner to researcher:

  • Phase 1: Fundamentals program (this application)

  • Phase 2: Offline research fellowship in India (similar to MATS and ARENA)

  • Phase 3: Research Paper Development & Submission

By focusing on both understanding AND application, we're building the next generation of AI safety researchers, engineers, and policy experts who can effectively contribute to ensuring advanced AI systems remain safe and aligned with human values.

What are this project's goals? How will you achieve them?

Primary Goals:

  1. Develop AI safety talent pipeline: Enable 100+ participants globally to begin practical research in AI Safety.

  2. Bridge theory-practice gap: Transform theoretical knowledge into applied skills.

  3. Build global community: Create a diverse network of motivated AI safety practitioners across countries.

  4. Increase accessibility: Make high-quality, practical AI safety education available at no cost to participants.

Implementation Strategy:

Structured Curriculum:

  • 8 weeks core content + 2 weeks supervised project implementation

  • Weekly sessions combining theory with immediate practical application

  • Adapted from Atlas curriculum with enhanced focus on coding exercises and mini-projects

  • Progressive skill building from foundational concepts to specialized topics

Engagement Model:

  • Small-group format (5-10 participants per group) ensuring personalized attention

  • Expert facilitation emphasizing discussion and collaborative problem-solving

    • Experienced facilitators who have completed Bluedot courses or the Co-operative AI course, or who have served as AI Safety Collab facilitators or AI Safety Camp fellows, and who have strong hands-on coding skills

  • Weekly assignments with concrete deliverables and feedback

  • Dedicated Slack workspace for continuous engagement between sessions

Outreach & Participant Selection:

  • Strategic partnerships with universities

    • Chhatrapati Shivaji Maharaj University

    • National Institute of Technology, Agartala

    • We are in talks with additional universities about collaboration

  • Collaboration with EA University Groups and AI Safety Communities

  • Targeted outreach to underrepresented regions in AI safety discourse

  • Selection process prioritizing motivation, potential for contribution, and diversity

Community Building:

  • Regional meetups where feasible (e.g., at IIT Madras and NIT Agartala)

  • Mentorship connections with experienced researchers

  • Project showcase opportunities

  • Pathway to Phase 2 Research Fellowship for promising participants

How will this funding be used?

A detailed budget spreadsheet is available here: https://docs.google.com/spreadsheets/d/18obeRSthIIPchcz5vVgDJXBEEWej9XIE_UXkv9yi-Ew/edit?usp=sharing

Minimum Funding Request: $6,710

This represents the essential baseline needed to run the program:

  • Core Software Infrastructure ($6,000):

    • Zoom Pro subscription ($1,500)

    • Google Meet Business+ ($1,500)

    • Read.ai Pro for transcription ($2,000)

    • Slack Pro workspace for ~150 users ($1,000)

  • Limited Marketing ($100):

    • Targeted social media advertising

  • 10% Contingency ($610)

At this minimum funding level, all facilitation, advising, and organizational work will be volunteer-based. We can operate effectively at this level, but it places significant burden on our volunteer team.

Expected Funding Request: $34,210

This represents our target funding level that enables fair compensation and optimal program quality:

  • All core infrastructure ($6,000) as detailed above

  • Personnel compensation ($25,000):

    • Facilitators ($9,000): 3 hrs/week × 10 weeks

    • Advisors ($1,000): 2.5 hrs/week × 10 weeks

    • Program Director ($5,000): 25 hrs/week × 10 weeks

    • Partnerships Lead ($5,000)

    • Marketing Lead ($5,000)

  • Marketing ($100)

  • 10% Contingency ($3,110)

At this funding level, we can ensure consistent quality, provide fair compensation for our team's expertise, and maximize participant experience.

Ambitious Funding Request: $61,710

This level would enable us to expand our reach, deepen program impact, and develop additional resources:

  • Ambitious personnel compensation ($50,000)

  • Infrastructure and marketing as above

  • 10% Contingency ($5,610)
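As a quick sanity check, the three funding tiers above are internally consistent: each total equals the line items (infrastructure + marketing + personnel) plus a 10% contingency on that base. A minimal Python sketch using the amounts from the budget:

```python
# Budget sanity check: each tier total = base costs + 10% contingency.
# Amounts are taken directly from the budget breakdown above.
INFRA = 1500 + 1500 + 2000 + 1000   # Zoom, Google Meet, Read.ai, Slack = $6,000
MARKETING = 100

def tier_total(personnel):
    """Return (total, contingency) for a tier with the given personnel cost."""
    base = INFRA + MARKETING + personnel
    contingency = round(base * 0.10)
    return base + contingency, contingency

minimum, c_min = tier_total(0)                                   # volunteer-run
expected, c_exp = tier_total(9000 + 1000 + 5000 + 5000 + 5000)   # $25,000 personnel
ambitious, c_amb = tier_total(50000)                             # $50,000 personnel

print(minimum, expected, ambitious)   # 6710 34210 61710
```

The contingency amounts ($610, $3,110, $5,610) likewise match the stated totals of $6,710, $34,210, and $61,710.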

Who is on your team? What's your track record on similar projects?

Leadership Team:

Aditya Raj - Program Director LinkedIn

  • Founder, Effective Altruism NIT Agartala

  • Successfully ran multiple cohort-based educational programs:

    • 2 cohorts of "EA Intro Course" (20 participants each)

    • 3-year Book Reading Club (10-15 active members)

    • QNITA program in collaboration with IBM (~500 participants)

  • Extensive AI safety education background:

    • Facilitated in "AI Safety Collab" for AI Alignment Track

    • Facilitated in "Scaling Altruism"

    • Completed Courses

      • AIS Hungary program

      • Bluedot AI Safety Fundamentals

      • Bluedot AI Governance Intensive

      • Bluedot Writing Intensive

      • Co-operative AI Course AI Safety Asia

      • Precipice Cohort

      • EA In-depth Program

    • Top 30 Rank in Grayswan Jailbreak Red Teaming Hackathon

Sireesha Chavali - Partnerships & Outreach Lead Sireesha Chavali | LinkedIn

Prishita Shukla - Marketing Lead Prishita Shukla | LinkedIn

Advisory Team:

  • Evander Hammer (AI Safety Coordinator | ML4Good Bootcamps | AI Safety Collab) - Evander Hammer 🔸 | LinkedIn

  • Aditya Prasad (AI Safety Researcher | PhD Student at Indian Institute of Science (IISc)) - LinkedIn

  • Shivam Raval (Interpretability | AI Safety | Physics & AI PhD @ Harvard) - LinkedIn

Institutional Support:

  • Collaborative relationship with AI Safety Collab

  • Confirmed university partnerships

    • Chhatrapati Shivaji Maharaj University

    • National Institute of Technology, Agartala

  • EA community connections for participant outreach

Our team combines academic expertise, educational experience, and a proven track record of delivering high-quality programs. We've successfully built communities around complex topics and consistently demonstrated our ability to translate challenging concepts into accessible learning experiences.

What are the most likely causes and outcomes if this project fails?

Most Likely Causes of Failure:

1. Not enough strong applicants and facilitators. -> Mitigation: we have already secured motivated participants and experienced facilitators who have completed Bluedot courses or the Co-operative AI course, or who have served as AI Safety Collab facilitators or AI Safety Camp fellows.

2. Too few funded facilitators. -> Securing funds for facilitators would meaningfully raise the morale and seriousness of the program.

3. Team overload. -> The team is somewhat overloaded at present, but we are bringing on additional people to distribute tasks.


How much money have you raised in the last 12 months, and from where?

This is our first dedicated fundraising effort for the Fundamentals of Safe AI - Global Cohort program.

  • This Manifund request specifically targets the essential operating costs for Phase 1 - our global foundational cohort. All previous activities have been volunteer-driven and self-funded by team members, demonstrating our commitment to this mission even before securing external support.

    With Manifund's support, we can establish the foundation of our AI safety research efforts and build toward global impact.
