Funding requirements

Sign grant agreement
Reach min funding
Get Manifund approval

AI Safety Los Angeles (AISLA)

Technical AI safety · AI governance · Global catastrophic risks

Kristina Vaia

Proposal · Grant
Closes August 18th, 2025
$2,500 raised
$2,500 minimum funding
$15,000 funding goal


Project summary

AI Safety Los Angeles (AISLA) aims to launch the city’s first open, action-oriented AI safety community. Our mission is to connect technical professionals, researchers, policymakers, and those curious about AI in order to raise awareness of AI safety risks, highlight collaboration opportunities, and support local talent. This initial funding will lay the foundation for a large, vibrant community that partners with universities, hosts guest speakers and seminars, and builds a strong online and in-person presence.

What are this project's goals? How will you achieve them?

  • Establish a flagship AI safety community in Los Angeles, open to all backgrounds and disciplines.

  • Raise awareness and understanding of AI safety risks, best practices, and governance.

  • Foster cross-disciplinary collaboration, mentorship, and support for those interested in AI safety and alignment.

  • Build sustainable partnerships with local universities, research groups, and industry leaders.

How we'll achieve them:

  • Host monthly or bimonthly meetups, workshops, and panel discussions featuring local and visiting experts.

  • Launch and grow an active online community (Slack/Discord, Twitter/X) for ongoing resource sharing, networking, and event coordination.

  • Partner with universities (UCLA, USC, Caltech, etc.) for joint events, research seminars, and talent development.

  • Organize guest speaker series and seminars to bring in leading voices from the AI safety field.

  • Encourage members to propose and lead their own projects, study groups, or public awareness campaigns.

How will this funding be used?

  • Venue rental and refreshments for regular meetups and workshops.

  • Speaker honorariums and travel support for guest experts.

  • Online community platform costs (Slack, Discord, Circle, Twitter/X).

  • Marketing and outreach (social media ads, event listings).

  • Materials and supplies for workshops and collaborative projects.

  • Stipends for organizers (currently just me; I plan to recruit a second organizer suited to the role).

  • Seed funding for member-led mini projects or public awareness initiatives.

  • University partnership activities (joint events, campus outreach).

Who is on your team? What's your track record on similar projects?

Kristina Vaia: Connector, networker, and passionate advocate for AI safety. Excited to build a community around what I care about more than anything: connecting people and making AI safety accessible and actionable in Los Angeles.

Advisors/Collaborators

Currently, there are no formal advisors or collaborators. I am actively seeking to connect with local AI professionals, researchers, and group leaders as the community launches and grows.

Track Record

  • I regularly connect AI professionals and enthusiasts for collaboration and knowledge sharing, and I've become the go-to person for Carnegie Mellon policy students starting a career in this space.

  • I'm an active, heavy user of AI tools and stay up to date with global AI safety communities.

  • I'm committed to growing AISLA into a large, impactful hub through partnerships, high-quality programming, and ongoing engagement.

  • While I haven't yet built or managed professional networks or online communities, I'm really eager to learn and leverage my strengths in networking and community building to make AISLA a success.

What are the most likely causes and outcomes if this project fails?

Most Likely Causes:

  • Insufficient member engagement or event attendance.

  • Difficulty securing venues, speakers, or university partnerships.

  • Overlap or lack of coordination with existing LA tech/AI groups.

Possible Outcomes:

  • The group remains small or inactive, with limited impact.

  • Valuable lessons learned about community building in LA; documentation and resources shared for future organizers.

  • Connections made during initial events may still spark collaborations or future groups, even if the main project pauses.

How much money have you raised in the last 12 months, and from where?

Amount Raised: $0 (this is a new initiative; no prior funding has been received). Note: All work to date has been volunteer-driven. This application is for seed funding to launch and grow the group.

Some additional notes: This funding is just the beginning. My vision is to scale AISLA into a large, inclusive, and sustainable community that serves as a model for other cities. We plan to grow membership, expand partnerships, and increase programming as the group gains traction. Commitment to impact: we'll track engagement, gather feedback, and transparently report on outcomes to ensure the community delivers real value to LA’s AI ecosystem.

If I Receive the Minimum

With $2,500, I'll:

  • Host 2–3 in-person or hybrid meetups at accessible venues, including refreshments.

  • Launch a basic online community (Slack or Discord) for ongoing engagement.

  • Cover essential marketing and outreach (social media, event listings).

  • Lay the groundwork for university partnerships and future growth.

  • Prioritize low-cost, high-impact activities to maximize reach and engagement.

If I Receive the Maximum

With $15,000, I'll:

  • Host monthly or bimonthly meetups and workshops, including guest speakers and panel discussions.

  • Build a robust online presence (Slack/Discord, Twitter/X), with regular content and resource sharing.

  • Offer speaker honorariums and travel support to attract high-quality guests.

  • Develop partnerships with local universities for joint events and seminars.

  • Support member-led mini projects and public awareness campaigns.

  • Invest in sustained marketing and community growth.

Comments (7)

Josh Landes

11 days ago

I have (from afar) co-organized/sponsored the LA event you mentioned (127 registrations, featured in the LA Luma cal, rated 4.9/5). I did some outreach to our community + LA-EA (student) groups (which aren't super active / are out of town for the summer) but overall invested no more than 30 minutes into organizing.

Our main goal with these events is to provide start-up momentum for (hopefully more long-term) local infrastructure. AE Studios and AISAP have been great partners doing the on-the-ground work (and are both keen to do more). I think having a dedicated local organizer would be great - both to build the talent pipeline and for outreach to industry/creative/movie/film communities! Seconding @saulmunn's point on more specifics wrt your plans here.

FYI, we are co-organizing another meetup on Thursday, August 7 (https://lu.ma/crcyjurj) at AE Studios in Del Rey (currently at 25 registrations). I think it would be great if you could attend to discuss how to scale up LA AIS community-building efforts!

los_angeleno1176 avatar

Kristina Vaia

11 days ago

absolutely! let’s connect. I’d love to discuss this more & help set up the event if needed. I just registered ✨


Saul Munn

16 days ago

thanks for writing this up!


(1)

[I] stay up to date with global AI safety communities

could you explain what this concretely means? for example — do you read the Alignment Forum, the EA Forum, or LessWrong? do you read AI-safety-relevant papers? do you attend EAGs, or alignment-related events? do you regularly schedule calls with people in the AI safety community?

and more specifically, how much have you connected with existing AI safety community builders (particularly those in the SF Bay Area, London, and Boston)?


(2)

Currently, there are no ... collaborators.

i would strongly suggest finding some collaborators! makes everything more motivating & fun, and also straightforwardly multiplies how much work you can do.


(3)

All work to date has been volunteer driven.

what work is that? it'd be great to see any work you've done as a volunteer!


(4)

How will this funding be used?

[... full section ...]

it'd be useful if, in this section, you gave a broad breakdown of how you expect to split the money between the different expense categories. additionally, i think you should be a bit more detailed about the stipend you intend to pay yourself/your collaborators. (tbc, it seems totally reasonable & likely the right call to pay yourselves, but having more detail about this would be good.)


(5)

Who is on your team? What's your track record on similar projects?

Kristina Vaia: Connector, networker, and passionate advocate for AI safety. Excited to build a community around what I care about more than anything: connecting people and making AI safety accessible and actionable in Los Angeles.

i'd love to see more concreteness here. some more-specific prompts (but please try to answer the general "lacks specificity" more than over-indexing on these particular questions):

  • what have you worked on in the past? particularly stuff that's AI-safety- or community-building-related, but also just cool/interesting/ambitious things you've done.

    • what's your professional background?

    • what projects have you led or centrally organized in the past?

    • have you ever led a team of people? how did that go?

  • what's your level of knowledge about AI safety?

  • are there people who could talk about your past work (i.e. references)? if so, maybe drop their names?

  • where can we learn more about you (e.g. LinkedIn, personal website, blog, etc)? [note: consider hyperlinking to your LinkedIn in this section.]

  • etc


(6)

what is the current landscape of AI safety work in Los Angeles? to what extent are you plugged into it?

you said in a different comment:

There was an AI Safety Event in Marina del Rey last month. ... AE Studio in Venice, CA is an AI product development company with an active alignment team. ... There is also significant crossover between members of the Effective Altruism LA, LA Rationality, and AI safety communities. ... UCLA hosts an AI safety research club ... USC has an org for AI alignment and safety ...

i'd be keen for more details about (a) your understanding of the AI safety community in LA; (b) the extent to which you're currently plugged in.

some example prompts for concreteness (again, don't index too hard on these exact questions):

  • to what extent are you in contact with the university club organizers at USC and UCLA? or with AE Studios?

  • have you been to any AI safety events in LA? how many?

  • etc


(7)

Amount Raised: $0

have you applied for grants elsewhere (& in particular, the LTFF)? what's the current status of those applications?

  • if you've applied & heard back, what was the response?

  • if you haven't applied, why not?


Kristina Vaia

11 days ago

@saulmunn yes absolutely! I’ll provide a full response shortly ~


Saul Munn

1 day ago

@los_angeleno1176 no worries if you need some more time, but would love to hear your thoughts on this — gentle bump!


Neel Nanda

23 days ago

Do you know of specific people who would be excited about this community? Do you have a sense of specific people you'd reach out to? I think that having a sense of the latent demand would make evaluating how promising this is much easier.


Kristina Vaia

23 days ago

Yup. There was an AI Safety Event in Marina del Rey last month, hosted by BlueDot Impact, the AI Safety Awareness Project, and AE Studio; technologists, researchers, and students interested in AI safety participated.

AE Studio in Venice, CA is an AI product development company with an active alignment team. The CEO (Judd Rosenblatt) is a well-known figure in the LA tech community and would be a valuable contact. There is also significant crossover between members of the Effective Altruism LA, LA Rationality, and AI safety communities. These groups usually share interests and members, making them great sources of early members.

UCLA hosts an AI safety research club focused on the development and impact of advanced AI systems. Reaching out to the club’s leadership and active members can help seed AISLA with more students and researchers. USC has an org for AI alignment and safety and can be contacted as well. There are also a ton of tech companies in LA that have AI teams - Snapchat, Hulu, Google, & Apple.