AI Safety Los Angeles (AISLA)
Manifund Grant Progress Update, March 2026
Kristina Vaia & Nathaniel Burnham
Overview
Six months ago I received a $2,500 seed grant to establish Los Angeles’s first AI safety community. This update is both an honest accounting of what happened and a case for why the next phase deserves continued investment.
What I Tried and What I Learned
The original format focused on academic discussions of technical papers and the book "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky and Nate Soares. We held four sessions, all organized through Partiful, with food, refreshments, and a great space for discussion. Over time we learned the format was not working: it attracted too narrow an audience, heavily rationality-adjacent, and was less accessible to researchers, engineers, and builders who were not already embedded in EA and LessWrong circles.
The long academic discussion sessions were tiring, engagement was uneven, and the overall energy wasn't what I'd hoped for. Los Angeles is not San Francisco: the community here is more diverse and more creative, and it requires a different entry point. So I made a deliberate decision to change the model entirely.
Our Pivot: Leading with Community
We shifted to informal, casual dinners as the anchor event format, creating a space for intellectually stimulating discussions on AI safety and whatever else guests wanted to explore. The barrier to entry is lower and the quality of conversation is higher. Depth emerges naturally once trust is established, rather than being mandated by structure or an agenda.
What surprised me most is that removing structure did not reduce intellectual depth; it dramatically improved it. Conversations were sharper, more candid, and more interdisciplinary. Instead of performative or overly academic discussion, people engaged in real, high-quality exchanges across multiple disciplines. AISLA is both a social community and a space for intellectually serious conversation that feels natural rather than forced. That balance is rare, and it is what makes the format work.
We intentionally allow conversations to move beyond AI safety when it makes sense. Some of the most engaging discussions naturally expand into adjacent areas like physics, systems thinking, or broader questions about technology and society. There is no pressure to stay narrowly on topic. That kind of restriction tends to reduce curiosity and energy. By keeping the environment open, conversations stay more stimulating and often lead back to AI safety with deeper and more original perspectives.
Our most recent dinner in March 2026 validated this approach. Eight people gathered at 1212 Santa Monica, seated around a cozy fire pit. The room included a researcher from RAND, a fellow attending a Stanford fellowship, an AE Studio employee working under CEO Judd Rosenblatt, a machine learning engineer, a senior technologist in cybersecurity, a rationality community member with a strong technical background, and an entrepreneur using AI in her business. The conversation ranged from alignment strategy and governance to real-world deployment risks, physics, and career paths in AI safety.
This is the cross-disciplinary mix AISLA was meant to generate; it just took a format change to unlock it.
Team Update
One of the original reviewer concerns was the lack of collaborators. Nathaniel Burnham has joined as co-founder.
Nathaniel leads the Los Angeles rationality community and hosts RAT meetings every Wednesday. His recent meetings have centered on AI safety, and he had been looking for a dedicated outlet for that work. His network includes some of the most prominent figures in alignment research. His involvement has meaningfully strengthened AISLA.
Together we bring complementary strengths. I focus on operations, communications, and building our network and community. Nathaniel brings technical credibility and depth in the alignment network; he built our website and assists with marketing and outreach.
Engagement with the Broader AI Safety Ecosystem
I stay connected to the broader AI safety field through both content and in-person engagement. I regularly read LessWrong, the Alignment Forum, and the EA Forum. I attend AI safety events in Berkeley on a recurring basis and travel to the Bay Area frequently for networking. My most recent event was this past weekend: Explainable's Frame Fellowship event called "The Conversation," which emphasized the importance of content creators in AI safety. I attend EAG conferences and maintain ongoing conversations with individuals working in AI safety, policy, and adjacent fields. This allows me to bring current ideas and discussions into AISLA while connecting LA-based members to the broader ecosystem.
Public Communications and Growing Profile
A community is only as effective as its ability to reach people. I’ve been building a public communications presence designed to make AI safety accessible to broader audiences.
@asktheaigirl is my personal AI safety education platform across YouTube, Instagram, and TikTok. I originally built it around a structured curriculum based on leading safety materials. Early content validated demand, but I learned that TikTok was not the optimal platform for in-depth safety education. I'm now focusing on YouTube for deeper engagement while continuing to cross-post on other platforms for maximum reach. The roadmap includes educational explainers, personal video essays on AI and society, and interviews with researchers, several of whom have already agreed to participate, including Alex Turner, a researcher at Google DeepMind and MATS mentor. While separate from AISLA, the platform directly supports the community by building public awareness and funneling interested people into the ecosystem.
The AISLA LinkedIn page is building a professional presence across the LA tech and policy ecosystem.
I also applied to Anthropic's Claude Ambassador program as part of positioning myself as a credible independent voice in AI safety education. My goal is to bridge technical AI safety work and broader audiences while directing that attention back into AISLA.
Infrastructure and Online Presence
The AISLA website is live at https://aisafetyla.com/. Our Discord is active for ongoing community engagement. The WhatsApp group "LA AI Safety Community" is used for event blasts and coordination. All events are managed through Partiful and links to past events and attendance are available on request.
Track Record and Relevant Experience
I've been working in AI safety for three years. I'm focused on building AISLA from the ground up and evolving it into foundational community infrastructure. This work has included planning four meetups and cultivating a digital network on Discord and WhatsApp. Through deliberate outreach across technical, policy, and research circles, I've begun to connect individuals across the governance and industry landscapes.
In my broader professional work, I have experience in AI policy and governance research through roles at EQTY Lab, MINT Lab, and Carnegie Mellon, as well as in AI safety operations and recruiting. Graduate students at CMU Heinz College (Public Policy & Information Systems) are directed to me for career guidance in AI safety.
LA AI Safety Landscape and My Involvement
Los Angeles has a growing but fragmented AI safety scene. This includes the Marina del Rey AI safety meetup hosted by BlueDot, AISAP, and AE Studio; AE Studio's active alignment team in Venice; UCLA AI safety research groups; USC alignment and AI safety student organizations; and overlap with EA LA and rationality communities. I’ve been building relationships with attendees and organizers, and remain in contact with people across these groups. I have a relationship with AE Studio, and they’ve expressed openness to co-hosting future AISLA events. What’s missing is consistent, centralized infrastructure. Events have been sporadic, communities are fragmented, and there is no stable hub. AISLA is designed to fill this gap.
The Next Phase: Four Pillars
The next six months will build AISLA into structured ecosystem infrastructure across four pillars.
Monthly community dinners will serve as the anchor: casual gatherings that drive retention and community cohesion, focused on real, high quality intellectual conversation rather than rigid agendas.
Discussions, debates, and key paper readings will focus on AI risk, governance, and policy questions in a format designed for genuine engagement rather than endurance.
Workshops and build sessions will allow members to share projects, explore tools, and work together. This shifts AISLA from discussion to action.
The network and talent layer will connect members across labs, policy, research, and industry while surfacing opportunities and functioning as long-term infrastructure for the LA ecosystem.
I’m planning two to four events per month: one monthly dinner plus additional discussions, networking sessions, and build sessions as the community grows.
Funding Request: $13,000
We are requesting $13,000 to fund the next six months of AISLA operations.
Monthly dinners x6 at $600 each: $3,600
Discussion and networking events x12 at $150 each: $1,800
Build sessions x6 at $200 each: $1,200
Marketing and operations: $1,000
Organizer stipends at $450 per month for two organizers across six months: $5,400
By the end of this grant period we expect to have hosted 18 to 24 events, grown the active AISLA community to 75 or more consistent participants, and secured at least one institutional co-hosting partnership in LA. Separately, through @asktheaigirl I aim to publish a minimum of 12 pieces of public AI safety content, continuing to build the pipeline of people who find their way into AISLA and the broader AI safety space.
Additional Funding
AISLA hasn’t yet applied to other major grant programs including LTFF. The focus has been on validating the model locally before pursuing larger funding sources.
Why This Matters Now
AI safety is no longer a niche academic concern. It’s actively shaping policy, national security, and real world deployment decisions. People who don’t work directly in the field are now aware of the importance of regulating AI. This will only continue to grow.
Los Angeles has major tech companies, top universities, and a powerful media ecosystem. What it lacks is cohesive AI safety infrastructure. One of the biggest gaps AISLA fills is the quality of engagement. Many spaces are either too technical and exclusionary or too casual and lacking depth. AISLA is intentionally designed to be accessible while remaining intellectually serious.
Long-Term Vision
AISLA is more than a meetup group; it's an early-stage ecosystem. The long-term goal is to evolve into a talent and idea pipeline for AI safety, similar in spirit to programs like MATS but rooted in Los Angeles. The next six months focus on proving the model through consistency, strong participation, and retention.
Kristina Vaia & Nathaniel Burnham | AI Safety Los Angeles https://aisafetyla.com/ | Discord: AI Safety LA | LinkedIn: Kristina Vaia
My original proposal is below:
Project summary
AI Safety Los Angeles (AISLA) aims to launch the city's first open, action-oriented AI safety community. Our mission is to connect technical professionals, researchers, policymakers, and those curious about AI in order to raise awareness of AI safety risks, create collaboration opportunities, and support local talent. This initial funding will lay the foundation for a large, vibrant community that will partner with universities, host guest speakers and seminars, and build a strong online and in-person presence.
What are this project's goals? How will you achieve them?
Establish a flagship AI safety community in Los Angeles, open to all backgrounds and disciplines.
Raise awareness and understanding of AI safety risks, best practices, and governance.
Foster cross-disciplinary collaboration, mentorship, and support for those interested in AI safety and alignment.
Build sustainable partnerships with local universities, research groups, and industry leaders.
How we'll achieve them:
Host monthly or bimonthly meetups, workshops, and panel discussions featuring local and visiting experts.
Launch and grow an active online community (Slack/Discord, Twitter/X) for ongoing resource sharing, networking, and event coordination.
Partner with universities (UCLA, USC, Caltech, etc.) for joint events, research seminars, and talent development.
Organize guest speaker series and seminars to bring in leading voices from the AI safety field.
Encourage members to propose and lead their own projects, study groups, or public awareness campaigns.
How will this funding be used?
Venue rental and refreshments for regular meetups and workshops.
Speaker honorariums and travel support for guest experts.
Online community platform costs (Slack, Discord, Circle, Twitter/X).
Marketing and outreach (social media ads, event listings).
Materials and supplies for workshops and collaborative projects.
Stipends for organizers (currently only me, but I will source another person fit for the role).
Seed funding for member-led mini projects or public awareness initiatives.
University partnership activities (joint events, campus outreach).
Who is on your team? What's your track record on similar projects?
Kristina Vaia: Connector, networker, and passionate advocate for AI safety. Excited to build a community around what I care about more than anything: connecting people and making AI safety accessible and actionable in Los Angeles.
Advisors/Collaborators
Currently, there are no formal advisors or collaborators. I am actively seeking to connect with local AI professionals, researchers, and group leaders as the community launches and grows.
Track Record
I regularly connect AI professionals and enthusiasts for collaboration and knowledge sharing. I've become the go-to person for Carnegie Mellon policy students starting a career in this space.
I'm an active, heavy user of AI tools and stay up to date with global AI safety communities.
I'm committed to growing AISLA into a large, impactful hub through partnerships, high-quality programming, and ongoing engagement.
While I haven't yet built or managed professional networks or online communities, I'm really eager to learn and leverage my strengths in networking and community building to make AISLA a success.
What are the most likely causes and outcomes if this project fails?
Most Likely Causes:
Insufficient member engagement or event attendance.
Difficulty securing venues, speakers, or university partnerships.
Overlap or lack of coordination with existing LA tech/AI groups.
Possible Outcomes:
The group remains small or inactive, with limited impact.
Valuable lessons learned about community building in LA; documentation and resources shared for future organizers.
Connections made during initial events may still spark collaborations or future groups, even if the main project pauses.
How much money have you raised in the last 12 months, and from where?
Amount Raised: $0 (This is a new initiative; no prior funding received.) Note: All work to date has been volunteer driven. This application is for seed funding to launch and grow the group.
Additional notes: This funding is just the beginning. My vision is to scale AISLA into a large, inclusive, and sustainable community that serves as a model for other cities. We plan to grow membership, expand partnerships, and increase programming as the group gains traction. Commitment to impact: we'll track engagement, gather feedback, and transparently report on outcomes to ensure the community delivers real value to LA's AI ecosystem.
If I Receive the Minimum
With $2,500, I'll:
Host 2–3 in-person or hybrid meetups at accessible venues, including refreshments.
Launch a basic online community (Slack or Discord) for ongoing engagement.
Cover essential marketing and outreach (social media, event listings).
Lay the groundwork for university partnerships and future growth.
Prioritize low-cost, high-impact activities to maximize reach and engagement.
If I Receive the Maximum
With $15,000, I'll:
Host monthly or bimonthly meetups and workshops, including guest speakers and panel discussions.
Build a robust online presence (Slack/Discord, Twitter/X), with regular content and resource sharing.
Offer speaker honorariums and travel support to attract high-quality guests.
Develop partnerships with local universities for joint events and seminars.
Support member-led mini projects and public awareness campaigns.
Invest in sustained marketing and community growth.