
Funding requirements

  • Sign grant agreement
  • Reach min funding
  • Get Manifund approval

General support for research activities

AI governance · Global catastrophic risks

Amritanshu Prasad

Proposal · Grant
Closes June 27th, 2025
$0 raised
$500 minimum funding
$15,000 funding goal


Project summary

My name is Amritanshu Prasad, and I am a research engineer and policy analyst working at the intersection of AI safety, AI governance, and global stability. This funding request supports my ongoing and new work in these areas. Specifically, I am seeking personal funding of up to $15,000 over the next 6 months to compensate me for my time on several largely unpaid research publications, to let me dedicate resources to promising projects I am not currently remunerated for, and to cover some initial startup costs for Suav Tech, a new AI safety evaluations organization I am founding. Projects I am currently working on include co-authoring publications on "Opportunities and Risks from Advanced AI in Nuclear Verification" and "The Nuclear Analogy in AI Governance Research", developing research on coup risk prediction using machine learning, and launching Suav Tech. This is a personal grant to support my independent work and initial efforts for Suav Tech, not a direct investment in Suav Tech as a fully formed entity.

What are this project's goals? How will you achieve them?

My primary goals for this funding period are:

  1. Complete and Publish Key Research:

  • Goal: Finalize and submit publication drafts for "Opportunities and Risks from Advanced AI in Nuclear Verification" and "The Nuclear Analogy in AI Governance Research". These publications address critical gaps in understanding AI's impact on nuclear stability and the lessons from nuclear history for AI governance, and will be published as chapters in books on nuclear verification and on the global governance of AI, respectively. I will also complete a research project on predicting coup risk with ML and present it at the Crisis Computing workshop during the ITU AI for Good Summit.

  • How: This funding will allow me to dedicate focused time to completing the final revisions, engaging with co-authors, and navigating the submission and peer-review process for the relevant venues. A portion of the funds will also cover travel expenses to Geneva for the Crisis Computing workshop.

  2. Initiate Suav Tech Operations:

  • Goal: Launch Suav Tech, my new AI safety evaluations organization, by covering essential initial startup costs.

  • How: A portion of the funds will be used for practical necessities such as legal and administrative setup costs.

Achieving these goals will allow me to contribute tangible research outputs to the AI safety and governance fields and lay the groundwork for Suav Tech to become a valuable contributor to proactive risk mitigation.

How will this funding be used?

Most (approx. 80%) of this funding is meant to cover personal expenses, freeing up my time to work on these projects. The remaining 20% (up to $1.5k) will likely go toward legal and admin fees for setting up Suav Tech. Any additional funds will likely support conference travel, compute for personal research, and productivity tools. Beyond those setup costs, the intention is not to use any of these funds for my evals org. If you are interested in funding Suav Tech directly, we will soon add a dedicated project here to raise funds for it.

What's your track record on similar projects?

  • Current Roles:

    • Research Engineer at Equistamp: I engage in AI safety evaluation tasks, including novel task development, capability assessment, and human baselining for LLMs, working with clients like the UK AI Safety Institute (AISI). This involves developing and refining evaluation methodologies for advanced AI models.

    • Member of Working Group 8 at the Alva Myrdal Centre for Nuclear Disarmament: I collaborate on policy projects concerning international AI governance, with a focus on nuclear governance, contributing technical AI and AI-policy insights to reports and publications.

  • Past Experience:

    • Model Interaction Contractor at METR: I reviewed LLM agent transcripts, identified failure modes, implemented human baselines, and applied advanced prompting techniques like red-teaming to assess model safety and capabilities.

    • Course Facilitator for AI Governance (AI Safety Fundamentals): I led cohorts of graduate students, policymakers, and ML engineers through discussions on AI policy, ethics, and governance frameworks.

  • Publications and Projects:

    • My project "Lessons for AI Governance from Atoms for Peace" (co-authored with Dr. Sophia Hatz) analyzed parallels between nuclear history and AI governance.

    • A report on "Landscape Analysis of AI Governance in India" I co-authored was awarded 3rd Prize at Future Academy v2.

What are the most likely causes and outcomes if this project fails?

Complete failure is unlikely, as two of these projects are already at a relatively advanced stage. The more likely failure mode is that my contributions end up significantly lower in quantity and quality if I cannot put in sufficient time or face other constraints. There are no clear direct negatives, but I expect failure would be a slight setback for any further work in this area.

Comments (3)

Amritanshu Prasad

3 days ago

If you would like to fund the AI safety evals org I am starting, please donate at Suav Tech, an AI Safety evals for-profit | Manifund


Neevan Sharma

3 days ago

I know him well - he’s experienced, reliable, and doing meaningful work in AI policy, ethics, and governance.

Definitely a project worth funding.


Sushant Shah

3 days ago

I know Mr. Amritanshu, and he's one of the most active people in the AI Ethics field that I know of.