Vincent Weisser

@vincentweisser

focused on open/decentralized agi, alignment and scientific progress

vincentweisser.com
$0 total balance
$0 charity balance
$0 cash balance

$0 in pending offers

Outgoing donations

Luthien
$200
2 months ago
Luthien
$200
2 months ago
SafePlanBench: evaluating a Guaranteed Safe AI Approach for LLM-based Agents
$250
3 months ago
Investigating and informing the public about the trajectory of AI
$250
3 months ago
human intelligence amplification @ Berkeley Genomics Project
$100
3 months ago
Attention-Guided-RL for Human-Like LMs
$100
3 months ago
human intelligence amplification @ Berkeley Genomics Project
$500
3 months ago
AI-Driven Market Alternatives for a post-AGI world
$115
4 months ago
AI-Driven Market Alternatives for a post-AGI world
$100
4 months ago
MATS Program
$200
4 months ago
Lightcone Infrastructure
$100
4 months ago
Next Steps in Developmental Interpretability
$200
4 months ago
10th edition of AI Safety Camp
$200
4 months ago
Biosecurity bootcamp by EffiSciences
$100
4 months ago
SafePlanBench: evaluating a Guaranteed Safe AI Approach for LLM-based Agents
$200
4 months ago
Investigating and informing the public about the trajectory of AI
$200
4 months ago
Making 52 AI Alignment Video Explainers and Podcasts
$50
over 1 year ago
AI Safety Research Organization Incubator - Pilot Program
$200
over 1 year ago
AI Safety Research Organization Incubator - Pilot Program
$277
over 1 year ago
AI Safety Research Organization Incubator - Pilot Program
$500
over 1 year ago
Scaling Training Process Transparency
$150
over 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda
$100
over 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda
$10
over 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda
$100
over 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda
$790
over 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda
$1000
over 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda
$210
over 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda
$500
over 1 year ago
Exploring novel research directions in prosaic AI alignment
$200
over 1 year ago
MATS Program
$300
over 1 year ago
MATS Program
$500
over 1 year ago
Empirical research into AI consciousness and moral patienthood
$50
over 1 year ago
Empirical research into AI consciousness and moral patienthood
$70
over 1 year ago
Run five international hackathons on AI safety research
$100
over 1 year ago
Avoiding Incentives for Performative Prediction in AI
$50
over 1 year ago
AI Alignment Research Lab for Africa
$150
over 1 year ago
AI Alignment Research Lab for Africa
$100
over 1 year ago
AI Alignment Research Lab for Africa
$150
over 1 year ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability
$100
over 1 year ago
Avoiding Incentives for Performative Prediction in AI
$100
over 1 year ago
Discovering latent goals (mechanistic interpretability PhD salary)
$150
over 1 year ago
Introductory resources for Singular Learning Theory
$50
over 1 year ago
Holly Elmore organizing people for a frontier AI moratorium
$100
over 1 year ago
Recreate the cavity-preventing GMO bacteria BCS3-L1 from precursor
$50
over 1 year ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability
$150
over 1 year ago
Activation vector steering with BCI
$150
over 1 year ago
Avoiding Incentives for Performative Prediction in AI
$50
over 1 year ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability
$70
over 1 year ago
Alignment Is Hard
$70
over 1 year ago
Introductory resources for Singular Learning Theory
$70
over 1 year ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability
$100
over 1 year ago
Compute and other expenses for LLM alignment research
$100
over 1 year ago
Optimizing clinical Metagenomics and Far-UVC implementation.
$100
over 1 year ago
Apollo Research: Scale up interpretability & behavioral model evals research
$160
over 1 year ago
Apollo Research: Scale up interpretability & behavioral model evals research
$250
almost 2 years ago
Run five international hackathons on AI safety research
$250
almost 2 years ago
Holly Elmore organizing people for a frontier AI moratorium
$100
almost 2 years ago
Discovering latent goals (mechanistic interpretability PhD salary)
$400
almost 2 years ago
Discovering latent goals (mechanistic interpretability PhD salary)
$40
almost 2 years ago
Scoping Developmental Interpretability
$45
almost 2 years ago
Scoping Developmental Interpretability
$1000
almost 2 years ago
Scoping Developmental Interpretability
$455
almost 2 years ago
Joseph Bloom - Independent AI Safety Research
$250
almost 2 years ago
Joseph Bloom - Independent AI Safety Research
$100
almost 2 years ago
Joseph Bloom - Independent AI Safety Research
$50
almost 2 years ago
Agency and (Dis)Empowerment
$250
almost 2 years ago
Isaak Freeman
$100
almost 2 years ago
Medical Expenses for CHAI PhD Student
$43
almost 2 years ago
Long-Term Future Fund
$50
almost 2 years ago

Comments

Ozempic for Sleep: Research for Safely Reducing Sleep Needs

Vincent Weisser

4 months ago

Important research project! Isaak and Helena are awesome and are assembling a great team that should make progress on it.

Cadenza Labs: AI Safety research group working on own interpretability agenda

Vincent Weisser

over 1 year ago

Awesome work! One of the most exciting areas of alignment, in my view!

AI Safety Research Organization Incubator - Pilot Program

Vincent Weisser

over 1 year ago

Very excited about this effort; I think it could have great impact. I personally know Kay and think he has a good chance of delivering on this with his team!

Empowering AI Governance - Grad School Costs Support for Technical AIS Research

Vincent Weisser

over 1 year ago

Is this project still seeking funding, or is it unrelated to this one? https://manifund.org/projects/gabriel-mukobi-summer-research

AI Alignment Research Lab for Africa

Vincent Weisser

over 1 year ago

Glad to hear, and awesome to see this initiative!

Compute and other expenses for LLM alignment research

Vincent Weisser

almost 2 years ago

Might be worth keeping it open for more donations if requested?

Transactions

For | Date | Type | Amount
Luthien | 2 months ago | project donation | 200
Luthien | 2 months ago | project donation | 200
Manifund Bank | 2 months ago | withdraw | 14000
SafePlanBench: evaluating a Guaranteed Safe AI Approach for LLM-based Agents | 3 months ago | project donation | 250
Investigating and informing the public about the trajectory of AI | 3 months ago | project donation | 250
human intelligence amplification @ Berkeley Genomics Project | 3 months ago | project donation | 100
Attention-Guided-RL for Human-Like LMs | 3 months ago | project donation | 100
human intelligence amplification @ Berkeley Genomics Project | 3 months ago | project donation | 500
AI-Driven Market Alternatives for a post-AGI world | 4 months ago | project donation | 115
AI-Driven Market Alternatives for a post-AGI world | 4 months ago | project donation | 100
MATS Program | 4 months ago | project donation | 200
Lightcone Infrastructure | 4 months ago | project donation | 100
Next Steps in Developmental Interpretability | 4 months ago | project donation | 200
10th edition of AI Safety Camp | 4 months ago | project donation | 200
Biosecurity bootcamp by EffiSciences | 4 months ago | project donation | 100
SafePlanBench: evaluating a Guaranteed Safe AI Approach for LLM-based Agents | 4 months ago | project donation | 200
Investigating and informing the public about the trajectory of AI | 4 months ago | project donation | 200
Manifund Bank | 4 months ago | deposit | +17015
Making 52 AI Alignment Video Explainers and Podcasts | over 1 year ago | project donation | 50
AI Safety Research Organization Incubator - Pilot Program | over 1 year ago | project donation | 200
AI Safety Research Organization Incubator - Pilot Program | over 1 year ago | project donation | 277
AI Safety Research Organization Incubator - Pilot Program | over 1 year ago | project donation | 500
Scaling Training Process Transparency | over 1 year ago | project donation | 150
Cadenza Labs: AI Safety research group working on own interpretability agenda | over 1 year ago | project donation | 100
Cadenza Labs: AI Safety research group working on own interpretability agenda | over 1 year ago | project donation | 10
Cadenza Labs: AI Safety research group working on own interpretability agenda | over 1 year ago | project donation | 100
Cadenza Labs: AI Safety research group working on own interpretability agenda | over 1 year ago | project donation | 790
Cadenza Labs: AI Safety research group working on own interpretability agenda | over 1 year ago | project donation | 1000
Cadenza Labs: AI Safety research group working on own interpretability agenda | over 1 year ago | project donation | 210
Cadenza Labs: AI Safety research group working on own interpretability agenda | over 1 year ago | project donation | 500
Manifund Bank | over 1 year ago | deposit | +500
Manifund Bank | over 1 year ago | deposit | +500
Manifund Bank | over 1 year ago | deposit | +1000
Manifund Bank | over 1 year ago | deposit | +1000
Manifund Bank | over 1 year ago | deposit | +300
Exploring novel research directions in prosaic AI alignment | over 1 year ago | project donation | 200
Manifund Bank | over 1 year ago | deposit | +200
Manifund Bank | over 1 year ago | mana deposit | +10
MATS Program | over 1 year ago | project donation | 300
MATS Program | over 1 year ago | project donation | 500
Manifund Bank | over 1 year ago | deposit | +500
Manifund Bank | over 1 year ago | deposit | +300
Empirical research into AI consciousness and moral patienthood | over 1 year ago | project donation | 50
Empirical research into AI consciousness and moral patienthood | over 1 year ago | project donation | 70
Run five international hackathons on AI safety research | over 1 year ago | project donation | 100
Avoiding Incentives for Performative Prediction in AI | over 1 year ago | project donation | 50
Manifund Bank | over 1 year ago | deposit | +200
AI Alignment Research Lab for Africa | over 1 year ago | project donation | 150
AI Alignment Research Lab for Africa | over 1 year ago | project donation | 100
AI Alignment Research Lab for Africa | over 1 year ago | project donation | 150
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 1 year ago | project donation | 100
Avoiding Incentives for Performative Prediction in AI | over 1 year ago | project donation | 100
Discovering latent goals (mechanistic interpretability PhD salary) | over 1 year ago | project donation | 150
Manifund Bank | over 1 year ago | deposit | +500
Introductory resources for Singular Learning Theory | over 1 year ago | project donation | 50
Holly Elmore organizing people for a frontier AI moratorium | over 1 year ago | project donation | 100
Recreate the cavity-preventing GMO bacteria BCS3-L1 from precursor | over 1 year ago | project donation | 50
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 1 year ago | project donation | 150
Activation vector steering with BCI | over 1 year ago | project donation | 150
Manifund Bank | over 1 year ago | deposit | +500
Avoiding Incentives for Performative Prediction in AI | over 1 year ago | project donation | 50
Manifund Bank | over 1 year ago | deposit | +500
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 1 year ago | project donation | 70
Alignment Is Hard | over 1 year ago | project donation | 70
Introductory resources for Singular Learning Theory | over 1 year ago | project donation | 70
Manifund Bank | over 1 year ago | deposit | +500
WhiteBox Research: Training Exclusively for Mechanistic Interpretability | over 1 year ago | project donation | 100
Compute and other expenses for LLM alignment research | over 1 year ago | project donation | 100
Optimizing clinical Metagenomics and Far-UVC implementation. | over 1 year ago | project donation | 100
Apollo Research: Scale up interpretability & behavioral model evals research | over 1 year ago | project donation | 160
Apollo Research: Scale up interpretability & behavioral model evals research | almost 2 years ago | project donation | 250
Run five international hackathons on AI safety research | almost 2 years ago | project donation | 250
Holly Elmore organizing people for a frontier AI moratorium | almost 2 years ago | project donation | 100
Discovering latent goals (mechanistic interpretability PhD salary) | almost 2 years ago | project donation | 400
Discovering latent goals (mechanistic interpretability PhD salary) | almost 2 years ago | project donation | 40
Scoping Developmental Interpretability | almost 2 years ago | project donation | 45
Scoping Developmental Interpretability | almost 2 years ago | project donation | 1000
Scoping Developmental Interpretability | almost 2 years ago | project donation | 455
Joseph Bloom - Independent AI Safety Research | almost 2 years ago | project donation | 250
Joseph Bloom - Independent AI Safety Research | almost 2 years ago | project donation | 100
Joseph Bloom - Independent AI Safety Research | almost 2 years ago | project donation | 50
Manifund Bank | almost 2 years ago | deposit | +1000
Agency and (Dis)Empowerment | almost 2 years ago | project donation | 250
Manifund Bank | almost 2 years ago | deposit | +2000
<e083e3b0-a131-4eaa-8a83-6a146a196432> | almost 2 years ago | profile donation | 100
Medical Expenses for CHAI PhD Student | almost 2 years ago | project donation | 43
<03fac9ff-2eaf-46f3-b556-69bdee303a1f> | almost 2 years ago | profile donation | 50
Manifund Bank | almost 2 years ago | deposit | +900
Manifund Bank | almost 2 years ago | deposit | +100