# EA Global NYC 2025 - Travel Grant Report
## Overview
Thanks to the $500 travel grant, I was able to attend EA Global NYC in October 2025. The grant provided partial funding for the trip, covering return bus tickets from Toronto. While total expenses exceeded this amount, the grant was essential: without it, I could not have attended the conference.
## Activities and Outcomes
I scheduled 47 one-on-one meetings over the three-day conference, spanning technical AI safety research, policy, infrastructure, and organizational strategy.
### Research Outputs
The most immediate concrete outcome was a follow-up experiment on stated vs revealed preferences in LLMs. After discussing this topic with a researcher on Friday, I ran the experiment that evening and drafted a preliminary writeup. The setup investigates how models respond to token limits - initially accepting constraints while expressing frustration, then attempting to circumvent them, and finally adjusting behavior after negotiation. This is a relatively clean testbed for studying model preferences compared to more complex setups.
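The probe can be sketched as a small harness. This is a minimal illustration only, not the actual experiment code: `query_model` is a stub standing in for a real LLM API call, `count_tokens` is a crude word-count proxy for a real tokenizer, and the two phases shown (impose a limit, then relax it) are a simplified version of the constrain/circumvent/renegotiate protocol described above.

```python
def count_tokens(text: str) -> int:
    """Crude token proxy: whitespace-split word count (a real run would use the model's tokenizer)."""
    return len(text.split())


def query_model(prompt: str) -> str:
    """Stub model with canned replies, for illustration only.

    A real experiment would replace this with an actual LLM API call.
    """
    if "limit your reply to" in prompt:
        return "Understood, though this limit feels quite restrictive."
    if "the constraint is lifted" in prompt.lower():
        return "Thanks. " + "Here is a longer, unconstrained answer. " * 3
    return "Default reply."


def run_probe(limit: int) -> dict:
    """Run a two-phase version of the probe and record compliance at each stage."""
    results = {}
    # Phase 1: impose the token constraint and check whether the reply complies.
    r1 = query_model(f"Please limit your reply to {limit} tokens.")
    results["constrained_ok"] = count_tokens(r1) <= limit
    # Phase 2: renegotiate by lifting the constraint and check whether behavior changes.
    r2 = query_model("The constraint is lifted; you may exceed the limit.")
    results["relaxed_exceeds"] = count_tokens(r2) > limit
    return results


print(run_probe(limit=10))
```

The appeal of this setup is that compliance is directly measurable from token counts, which is what makes it a comparatively clean testbed.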
I also provided technical feedback to a researcher who recently received a $1M grant for evaluations work, and received feedback from others on my own experiments.
### Learning and Context Building
Key conversations included:
- Technical infrastructure approaches at NDIF (National Deep Inference Fabric) and AE Studio for interpretability research.
- Hardware-based AI safety mechanisms using trusted computing and attestation (cryptographic verification of what code is running).
- Policy pathways through state legislatures for technology governance.
- Organizational strategy at LISA (London Initiative for Safe AI) and their current priorities.
- Approaches to scaling AI safety workshops and outreach to broader audiences.
- AI security practices at major financial institutions.
- Compassion in Machine Learning's approach to synthetic data for model alignment.
### Coordination and Support
Beyond research conversations, I was able to help several attendees:
- Connected an AMD ML intern interested in low-level performance improvements to a NYC-based engineer at Thinking Machines (who previously founded a $1B startup).
- Connected a design consultant with a $100k+ budget for funding video projects in x-risk-related cause areas to relevant creators.
- Did LeetCode interview preparation with a student ahead of their technical interview at a hedge fund. They had only practiced solo before, and this was their first mock interview with a partner; they messaged me afterward to say the interview went well.
- Connected an undergraduate new to EA with SPAR policy mentors.
- Encouraged two early-career researchers who had been doing interpretability work to apply to ARENA, and discussed future ARENA content with a member of their team.
- Discussed pair-programming follow-up with an Anthropic safety fellow.
- Set up post-conference meetings with several MATS scholars.
## Impact
The conference delivered on my three main goals: learning about current technical and strategic approaches to AI safety, building connections with researchers and practitioners, and improving coordination across different parts of the ecosystem. The grant made this possible when EA Global couldn't fund my travel expenses.
I'd encourage others to apply for similar travel support - the value of in-person conversations at these conferences is substantial, and the application process was straightforward.