I received a grant from the Effective Altruism (EA) Long-Term Future Fund (LTFF) in August 2021 to pursue a part-time PhD in AI safety for 2 years. This application is to extend that grant to allow me to continue pursuing my PhD full-time.
My research focuses on risk and decision analysis under extreme uncertainty, with potential existential risks from AI as the primary (but not exclusive) use case. This was also the subject of a previous project I led, Modeling Transformative AI Risks (MTAIR). I currently have two primary research directions: (1) developing general methodologies for decision making in highly complex, uncertain cases, a category that covers a large share of the decision problems EA faces; and (2) identifying and elucidating the cruxes in AI risk debates.
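To give a concrete flavor of this kind of modeling, here is a minimal, entirely hypothetical sketch in the spirit of MTAIR-style analysis: it propagates elicited uncertainty over a short chain of conditional hypotheses via Monte Carlo. The hypothesis chain, the Beta parameters, and all numbers below are illustrative assumptions of mine, not the actual MTAIR model.

```python
# Toy sketch of MTAIR-style risk analysis (hypothetical structure and
# numbers, not the actual MTAIR model): each node's probability is itself
# uncertain, represented by a Beta distribution standing in for elicited
# expert judgment, and uncertainty is propagated by Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical elicited uncertainty over three conditional probabilities:
# P(A):     advanced AI is developed this century
# P(B|A):   it is consequentially misaligned, given A
# P(C|A,B): misalignment leads to catastrophe, given A and B
p_a = rng.beta(8, 2, n)
p_b_given_a = rng.beta(2, 5, n)
p_c_given_ab = rng.beta(2, 8, n)

# Overall risk is the product along the chain, computed per sample so the
# output is a full distribution rather than a single point estimate.
p_catastrophe = p_a * p_b_given_a * p_c_given_ab

print(f"mean P(catastrophe): {p_catastrophe.mean():.3f}")
print(f"90% interval: [{np.quantile(p_catastrophe, 0.05):.3f}, "
      f"{np.quantile(p_catastrophe, 0.95):.3f}]")
```

The point of this style of analysis is that the output is a distribution over risk estimates rather than a single number, which makes the sensitivity of the conclusion to each disputed crux explicit.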
My advisors are Tim Oates and Jimmy Foulds at UMBC, and David Krueger at the University of Cambridge.
First, a PhD of any sort would be useful for influencing technical and policy research at the Johns Hopkins Applied Physics Lab (APL), where I work. APL is a large research lab of about 8,000 scientists, engineers, and policy analysts with close ties to the US government and military. PhDs carry considerable weight in those circles, and having one would greatly increase my chances of proposing x-risk-related projects or influencing x-risk-related policy analyses.
Second, much of my research focuses on developing methodologies for decision making in highly complex, extremely uncertain cases. Any improvements I can make in this area would be of potential interest for many EA-related decision problems across many cause areas (and for cause prioritization in general).
Third, the primary use case of a "highly complex, extremely uncertain decision area" that I've been focusing on is AI existential risk, and better decision making in this area in particular could be especially impactful.
Fourth, as part of this research I have been focusing much of my attention on identifying, clarifying, and analyzing the "cruxes" in AI risk debates. The current debate landscape can be subtle and confusing even for experts, and a clear exposition of the different positions, arguments, and cruxes could help move the discussion forward both among experts and with the general public.
Option 1) Part-time PhD / research position for 6 months, to be reassessed after that period: $60,000
In this case funding would be equivalent to what I would earn at APL (income + benefits) working 30 hours per week, which is approximately how much time I've been spending on my research recently, on average. My medical situation fluctuates a lot and could easily get worse again, but I am hopeful that allergy shots, medication for chronic pain, and a few other factors will keep it improving over time. Given that uncertainty, the idea would be to provide initial funding for 6 months and then reassess.
Funding breakdown:
Replace 6 months of APL income + benefits at 30 hours / week: $50,000
Tuition, textbooks, etc.: $6,000
Buffer: $4,000
Option 2) Full-time PhD / research position for 2 years: $320,000
This would be equivalent to what I would earn at APL (income + benefits) working a full 40 hours per week. As mentioned earlier, I am not currently working this much due to medical issues, but my condition has improved fairly steadily (on average) over the past few months: I am now working approximately 30 hours per week on average, whereas before that I was probably averaging only about 20. I am hopeful that my medical situation will continue to improve over the next few months and that I will gradually be able to devote more and more of my time to my research and coursework.
This level of funding would give me the financial support to devote as much time as I can to my research (up to approximately 40 hours per week, or perhaps even a bit more), subject to whatever my medical condition allows. If, for example, I find within the next month or two that I can average more than 30 hours per week, this level of funding would support that.
Funding breakdown:
Replace 2 years of APL income + benefits at 40 hours / week: $280,000
Tuition, textbooks, etc.: $25,000
Buffer: $15,000
Option 3) Full-time PhD / research position for 2 years + removing financial and psychological stressors: $375,000
This would be the same as Option 2, but with additional funding aimed at improving overall productivity by removing some of the financial and psychological stressors in my life.
Funding breakdown:
Replace 2 years of APL income + benefits at 40 hours / week: $280,000
Tuition, textbooks, etc.: $25,000
Additional income to help alleviate financial stressors: $25,000
Additional house help (to free up my time and physical / emotional energy): $8,000
Increased quality of downtime / vacation (hopefully leading to improved physical and emotional wellbeing and reduced barriers to work, particularly during times when I'm not feeling well for medical reasons): $15,000
Buffer: $22,000
Prior to my earlier LTFF grant I led a 15-person team on a project that later became the Modeling Transformative AI Risks (MTAIR) project. The results were published on the Alignment Forum and in a 150-page report: https://arxiv.org/abs/2206.09360. My current research is largely an expansion of that project.
Academically I have a very strong background and excellent advisors. I graduated summa cum laude from the University of Maryland, Baltimore County (UMBC) with undergraduate degrees in both mathematics and philosophy (both relevant to my current research), as well as a minor in computer science. I have three PhD advisors: at UMBC, Tim Oates (an expert in machine learning with decades of experience) and Jimmy Foulds (AI/ML, probabilistic models, Bayesian inference), and at the University of Cambridge, David Krueger.
Additionally, I have previously written several long, extensively researched analyses of a similar type to the analysis of AI x-risk cruxes that I am currently focusing on. Those analyses were very well received, and many people have told me they were extremely clear and well argued, which leads me to think I may have something of a comparative advantage in this type of work. (Those earlier analyses were in very different subject areas, primarily debates in philosophy and religion, but the style of analysis seems to me very similar in important ways to what I am now trying to do for x-risk debates.)
Outcome of previous LTFF grant:
My earlier grant allowed me to drop down to part-time work at the Johns Hopkins Applied Physics Lab (APL) so that I could pursue a part-time PhD in AI safety. Based on advice from my advisors, I eventually switched from the Computer Science department to the Information Systems department, and I switched my research from technical AI safety to risk / decision analysis with AI x-risk as the primary use case.
The MTAIR project later received a substantial grant from the FTX Future Fund regranting program (with David Manheim as the regrantor), which among other things allowed me to switch to full-time PhD research.
(Note: I originally pursued only a part-time PhD because at the time I was under the impression that a full-time PhD program would require me to quit my job, which I was not willing to do. I have a wife and three children, and I did not have the time to keep working part-time while also doing a full-time PhD program. I only found out later that I could take an educational leave of absence from APL to do a full-time PhD program without actually quitting my job.)
I used the LTFF and FTX Future Fund grants to spend substantial time on my PhD research and to take three university courses (although two of those were undergraduate prerequisites rather than graduate-level courses).
Unfortunately, however, I have not been able to accomplish nearly as much over the past two years as I had hoped, due to medical issues that grew gradually worse over that period. The issues mostly involve extreme chronic sinus pain that can leave me unable to work productively, sometimes for weeks at a time; even when I can work, I'm often sluggish or brain-fogged. I had surgery last May that helped somewhat, but the underlying problem seems to be strong allergic reactions triggering my sinuses. I am currently taking allergy shots and medication for chronic pain, which appear to have finally led to gradual improvement over the past few months, but I am still probably only at about 60-70% of my previous capacity. I am hopeful, though, that things will continue to improve and that within the next few months I will be back to my old self.
Much of my research is still unpublished (my medical issues have made long-form writing particularly difficult), but I did co-author one article as part of my research: https://www.lesswrong.com/posts/XAS5FKyvScLb7jqaF/cross-post-is-the-fermi-paradox-due-to-the-flaw-of-averages. I also have a few posts related to my research on LessWrong (https://www.lesswrong.com/users/alenglander).
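For readers unfamiliar with the "flaw of averages" framing in that article's title, here is a minimal sketch of the general idea, using placeholder factor ranges I made up rather than the article's actual model: a product of uncertain factors can have a large mean while still placing substantial probability on values below 1.

```python
# Minimal illustration of the "flaw of averages" (placeholder ranges, not
# the cited article's model): multiplying "average" values of uncertain
# Drake-style factors suggests many civilizations, while propagating the
# full distributions shows that zero is still quite plausible.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# log10 of three hypothetical multiplicative factors, each log-uniform,
# plus a fixed hypothetical scale constant of 10^6.
log_n = (rng.uniform(-3, 0, n)    # e.g., fraction of suitable planets
         + rng.uniform(-5, 0, n)  # e.g., chance life arises
         + rng.uniform(-2, 2, n)  # e.g., civilization longevity factor
         + 6)
n_civs = 10.0 ** log_n

print(f"mean N:   {n_civs.mean():,.0f}")      # large "average" answer
print(f"median N: {np.median(n_civs):,.0f}")  # far smaller typical answer
print(f"P(N < 1): {np.mean(n_civs < 1):.0%}") # an empty sky is unsurprising
```

On these made-up numbers the mean of N is on the order of 10^5, yet the median is about 100 and P(N < 1) is roughly one in six, which is why plugging averages into a Drake-style product can badly overstate how surprising an empty sky is.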
Any research related to AI x-risks has some potential for information hazards, particularly if it is used to inform policy decisions.
Currently I have funding from the Long-Term Future Fund, but that funding is set to lapse at the end of August 2023. I have applied to the LTFF for continued funding but have not yet received a decision. I have also submitted an application to the Nonlinear Network AI safety funding round.
Ryan Kidd
over 1 year ago
@alenglander, when do you expect to hear back from the LTFF? Was the Nonlinear Network funding successful?