
Jaeson's Independent Alignment Research and work on Accelerating Alignment

Technical AI safety

Jaeson Booker

Not funded · Grant
$0 raised

Project summary

Alignment Research: continue my research into collective intelligence systems for alignment and mechanism design for AI Safety.

First write-up here: https://www.lesswrong.com/posts/2SCSpN7BRoGhhwsjg/using-consensus-mechanisms-as-an-approach-to-alignment

Mechanism design for AI Safety: https://www.lesswrong.com/posts/4NScyGegfL7Dv4u7G/mechanism-design-for-ai-safety-reading-group-curriculum

Current rudimentary forms of collective intelligence networks: https://drive.google.com/file/d/1VnsobL6lIAAqcA1_Tbm8AYIQscfJV4KU/view

Utilize AI Safety Strategy to further accelerate alignment: support others' alignment endeavors through onboarding, mentoring, and connecting people with relevant research and organizations. Funds would be used only to accelerate alignment, not to lengthen timelines. We are growing in numbers, and I have many ideas for how this ecosystem can be developed further. We recently funded a prize pool hosted by AI Safety Plans, and we have many more ideas in mind for the future. With AI Safety Support gone, we need new organizations to fill the void and provide the assistance aspiring alignment researchers need.

Discord group: https://discord.gg/e8mAzRBA6y

Website: https://ai-safety-strategy.org/

What are this project's goals and how will you achieve them?


The goals are to find novel solutions and new angles for tackling the alignment problem, to accelerate and onboard new talent into alignment work, and to improve the overall trajectory toward a better future.

How will this funding be used?

One year's salary: $96,000 USD
Funding prizes and other ideas for accelerating alignment: $25,000 USD
One full-time or several part-time hires for onboarding and mentoring prospective alignment researchers: $100,000 USD

These tiers will be funded roughly in that order, each once the previous one is satisfied. For example, if I receive enough for an annual salary, additional funding goes toward prizes; if prizes are covered, further funding brings new talent onto the team.
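
For illustration only, here is a minimal sketch (not part of the proposal) of how raised funds would cascade through the tiers above in order; the tier labels and the `allocate` helper are hypothetical names chosen for this example.

```python
# Hypothetical sketch of the tiered allocation described above:
# each tier is filled in order before any money flows to the next.
TIERS = [
    ("One year's salary", 96_000),
    ("Prizes and other acceleration ideas", 25_000),
    ("Onboarding and mentoring hires", 100_000),
]

def allocate(total_raised: int) -> dict:
    """Fill each tier in order until the raised amount runs out."""
    allocation = {}
    remaining = total_raised
    for name, target in TIERS:
        allocation[name] = min(remaining, target)
        remaining -= allocation[name]
    return allocation

# Example: $120,000 raised fully funds the salary tier and puts
# the remaining $24,000 toward prizes; nothing reaches the hires tier.
print(allocate(120_000))
```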

What's your track record on similar projects?

Organizational: I have founded several tech startups, led several teams (including as a Project Manager), been a founding member of several other companies, and completed most of a Master of Business Administration.

Mechanism Design: I have experience working on mechanism design and consensus engineering: at MOAT I worked on creating the first decentralized energy token for the BSV network; at Algoracle I worked on the white paper for the first oracle network on Algorand; I designed a form of decentralized voting for companies; I assisted with incentivizing philanthropy at Project Kelvin; and as a Senior Cybersecurity Analyst I audited blockchain contracts for security vulnerabilities.

AI Safety: I took the AI Safety Fundamentals courses (both Technical and Governance) in 2021. While staying at the Centre For Enabling EA Learning & Research (CEEALAR), I worked on building a simulation for finding cooperation between governments on AI safety. I received a grant from the Centre for Effective Altruism and Effective Ventures to further my self-study of alignment research. I attended SERI MATS in the fall, under John Wentworth's online program. I have also read extensively on the topic and contributed to various discussions and blog posts, one of which won a Superlinear prize.

Other: While in undergrad, I TA'd and helped design the curriculum for the first university blockchain class, and I have mentored and offered consultation to newcomers wanting to get into the field.

What are the most likely causes and outcomes if this project fails? (premortem)

The most likely cause of failure is that alignment is hard, and getting more people working on the problem doesn't guarantee results.

What other funding are you or your project getting?

I have received $1,000 so far for my alignment research.

Similar projects (8)

Jaeson Booker
Research and engineering multi-agent alignment
Skilling up on interpretability and multi-agent alignment
Technical AI safety
$0 raised

Lawrence Chan
Exploring novel research directions in prosaic AI alignment
3 months
Technical AI safety
$30K raised

Alexander Bistagne
Alignment Is Hard
Proving Computational Hardness of Verifying Alignment Desiderata
Technical AI safety
$6.07K raised

Siao Si Looi
Building and maintaining the Alignment Ecosystem
12 months funding for 3 people to work full-time on projects supporting AI safety efforts
Technical AI safety, AI governance, EA community, Global catastrophic risks
$0 raised

Jaeson Booker
The AI Safety Research Fund
Creating a fund exclusively focused on supporting AI Safety Research
Technical AI safety
$100 / $100K raised

Jaeson Booker
Funding to attend AI Conclave
A month-long on-site campus to deeply understand and shape AI
$0 raised

Kabir Kumar
AI-Plans.com
Alignment Research Platform
$0 raised

Jacques Thibodeau
Jacques Thibodeau - Independent AI Safety Research
3-month salary for AI safety work on deconfusion and technical alignment.
Technical AI safety
$0 raised