
Advancing AI Safety: A Structured Path to Impact

Science & technology · Technical AI safety

Lara Nguyen

Grant · Not funded · $0 raised

Project summary

AI's exponential advancement demands urgent safety work. With your funding, I will transition to full-time work on AI safety through:

  • Technical training via AI Safety ANZ's structured TARA program

  • Self-directed study in machine learning, causal inference, and alignment research

  • Development of practical safety projects and a technical portfolio

This grant will enable me to focus full-time on acquiring the technical skills needed to help steer AI toward beneficial outcomes.


What are this project's goals? How will you achieve them?

This project will systematically build my technical capabilities in AI safety through:

Technical Foundation

  • Complete Deep Learning Specialisation and AI Safety Fundamentals courses

  • Develop machine learning expertise

  • Master core alignment methodologies

Field Experience

  • Contribute to open-source AI safety projects

  • Develop proof-of-concept safety implementations

Success Metrics

  • Technical portfolio demonstrating AI safety implementations

  • Contributions to safety documentation or research

  • Secured position in AI safety research/engineering

How will this funding be used?

This funding will support my full-time transition by covering:

  • Education: $7,236 for technical courses, study materials, and online subscriptions

  • Technology: $1,273 for computing resources essential for AI research

  • Living Expenses: $11,256 for rent and necessities during study

  • Professional Development: $905 for conference attendance and networking events

These costs total $20,670. Each dollar directly enables my full focus on developing the technical skills to contribute to AI alignment research.

Who is on your team? What's your track record on similar projects?

Though I am making this transition solo, I have demonstrated the ability to master new technical fields:

  • Self-taught programming skills, leading to a professional support engineering role at SAP

  • Completed the Machine Learning Specialisation and Responsible AI courses

  • Participant in the TARA program, with structured mentorship from field experts

  • Founded a non-profit tech startup, demonstrating entrepreneurial execution

These experiences prove my ability to acquire complex technical skills rapidly and apply them effectively.

What are the most likely causes and outcomes if this project fails?

Primary risks include:

  • Technical knowledge gaps in specialised alignment areas

  • A competitive job market for AI safety positions

  • Funding limitations affecting full-time dedication

If unsuccessful, the consequences include a delayed transition, reduced technical contributions, and missed opportunities for impact at the time when safety work is most urgent.

Mitigation strategies include leveraging free resources, seeking partial funding alternatives, and building stronger connections with safety organisations.

How much money have you raised in the last 12 months, and from where?

I have not raised funds in the past 12 months. My transition has progressed through self-funding and participation in programs like TARA. This grant represents the critical next step: enabling my full-time focus on developing technical AI safety skills when they are most urgently needed.
