Groundless Alignment Residency 2025

Science & technology · Technical AI safety · EA community · Global catastrophic risks

Aditya Arpitha Prasad

Proposal · Grant
Closes October 27th, 2025
$0 raised
$500 minimum funding
$30,000 funding goal


Project summary


We are organising an in-person residency program for the Autostructures fellows to live and work together in one location (likely in India).

The fellows have continued working on the Live Theory agenda beyond their time on Autostructures in AISC10 (Jan–Apr 2025). Follow this link to watch the MAISU presentations.

Concretely, this looks like further developing the three interfaces:

1) Live Conversational Threads (LCT) - Aditya Adiga is building an interface that lets you navigate an ongoing conversation in real time, noting which tangents are being created, so that insights can be exported in appropriate ways (see the sketch after this list).
2) Vibe Decoding - Jayson Amati is building this lens to assist fine-grained discernment, scaling the sensitivity we already have towards AI slop: content that is only superficially relevant but tricks us.
3) Soloware Platform - Aayush Kucheria and Kuil Schoneveld are building a platform for sharing views on text: recipes that let your software agent draw inspiration from your favorite UI/UX designers, while making sure credit is assigned appropriately.
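
To make the tangent-tracking idea concrete, here is a minimal sketch of how a conversation with tangents could be represented and traversed. This is our own illustration under assumed names (Message, findTangents), not LCT's actual implementation:

```typescript
// Illustrative only: a conversation modeled as a tree of messages,
// where a tangent is a point that spawns more than one reply branch.
interface Message {
  id: string;
  author: string;
  text: string;
  replies: Message[]; // tangents branch off here
}

// Walk the tree and collect branch points, so an interface could
// surface them for real-time navigation or later export.
function findTangents(root: Message): Message[] {
  const tangents: Message[] = [];
  const stack: Message[] = [root];
  while (stack.length > 0) {
    const node = stack.pop()!;
    if (node.replies.length > 1) {
      tangents.push(node);
    }
    stack.push(...node.replies);
  }
  return tangents;
}
```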

You can watch the latest demos of these interfaces here (request for access).

You can follow our Telegram channel and Substack for updates.

What are this project's goals? How will you achieve them?

The goal of this project is to integrate the three ongoing projects and to practice noticing whether we are able to cultivate the right relationship to the underlying AI models. As Abram Demski put it,

> The alignment target is a particular relationship between humans and AI. This cannot be engineered at a distance. A relationship has to be pursued close up.

We will be refining the current prototypes into polished products that can be used by the community.

Find out more on our website: groundless.ai

How will this funding be used?


This will be a 3 to 4 week residency program, so the money will be used for participants' accommodation, food, travel reimbursement, visa costs, and salaries for a cook and logistics staff.

You can find the budget breakdown here.

Who is on your team? What's your track record on similar projects?


Aditya A Prasad worked closely with Sahil on the AI Safety Workshop @ EA Hotel, which was a great success. Harshit has worked with EA organizations in India, such as Fish Welfare Initiative, and has experience handling logistics.

What are the most likely causes and outcomes if this project fails?


The funding may not be sufficient to bring all the fellows together and to create a safe, hygienic container that allows for focused work. Missing people might leave gaps in the integrity of the infrastructure we are building.

How much money have you raised in the last 12 months, and from where?

None so far.
