Some of the most talented AI people in the world are Poles. 10 out of the first 50 OpenAI employees were Polish (source: https://therecursive.com/openai-in-poland-fireside-chat-with-wojciech-zaremba-sam-altman-and-szymon-sidor/ ). Most of them come from just one elite university: the University of Warsaw. Remarkably, not only is there no AI safety org trying to reach its students; there isn't a single AI safety org in all of Poland.
Reaching out to Poland's best technical students and convincing them that AI safety is worth working on seems like a neglected, impactful, and tractable cause area. Furthermore, it seems that progress could be made quite inexpensively.
Concretely, the project would give skilled technical students their first exposure to AI safety. Ideally, we would help them gain enough knowledge and expertise to be a good fit for programs ranging in difficulty from ML4Good to MATS.
To be clear, by AI safety I mean decreasing the x-risks and s-risks posed by a future unaligned artificial superintelligence. There are many paths toward that noble goal. I plan to focus my fieldbuilding efforts mainly on the technical route, and to a lesser extent on outreach and policy. I currently regard a complete moratorium or ban on frontier AI model development as unrealistic, so I will not advocate for it.
The funding will cover one semester of fieldbuilding:
Organizing events: presentations, hackathons, networking, 1-on-1 mentoring, group projects
Creating a university group with regular meetings
Advertising costs, both offline and online
Travel & accommodation costs for speakers from outside Warsaw coming to present on campus
Optionally: some swag (stickers etc.) plus pizza & soft drinks at events, to help win the students' goodwill
Significant room for scaling up the project with more time and funding: reaching students from relevant but less strictly technical programs (e.g. cognitive neuroscience), reaching other good universities in the area, "preaching" AI safety at tech meetups, running the project for longer.
Most probable collaborators:
Poles I met at ML4Good: Mikołaj Kniejski, Jakub Nowak, Michał Skowronek
Active members of EA Poland who are based in Warsaw (~15 people)
Active members of the Polish online AI safety reading group
Most probable consultants for the project:
Charbel-Raphael Segerie (CeSIA), Nia Gardner (ML4Good organiser), Carolina Oliveira (operations, Condor Camp)
Experienced EA university group leaders and fieldbuilders such as Chris Szulc (leader of EA Poland)
Polish AI safety "rockstars" involved with EA, such as Tomek Korbak (currently at Anthropic)
Track record (project leader, Piotr): I attended the University of Warsaw's CS program myself, and a high school classmate of mine currently teaches there as a PhD student. I created a profitable two-person company at age 20, and in early 2020 I recruited 300 people in two months for a non-profit project via unpaid Facebook marketing. I have worked 5+ years as a software dev and briefly as a junior ML dev, attended ML4Good, and have been familiar with the EA movement & AI alignment ideas since 2014.
There is really no possibility of complete failure. In the worst case, I think we would convince far fewer people than we initially hoped. But we plan to reach everyone we can, one way or another (if all else fails, one could DM every student via Messenger or email).
The most likely obstacle I can foresee is someone from the faculty actively trying to prevent us from reaching the students. In that case, we would try to speak to the students directly, e.g. online. We would also try our luck with other programs and other universities in the city.
Zero; this would be my first grant. I provide my own salary, so one could say I have raised that much. I'm able to pay myself thanks to past investments (I invested in real estate with money earned as a programmer).
My income is only ~$650 net per month (purchasing power comparable to ~$1,300 in the US), so I am unable to fund more than just my salary.
I like libertarian/Randian aesthetics, am quite frugal, and will not waste your money.