5-week salary to further empirical research into AI consciousness and related issues bearing on potential AI moral patienthood.
Write a strategy doc for research on AI consciousness and related issues relevant to AI moral patienthood, and elicit feedback on it.
Spend more time on empirical research projects about AI consciousness and (as before) related issues relevant to AI moral patienthood; henceforth, I'll just say "AI consciousness" as shorthand.
Time to work on the project goals that would otherwise be spent on public-facing work (e.g. speaking to journalists, writing magazine articles, appearing on podcasts) and on applying for funding and jobs.
The salary is for Rob Long, an expert on AI consciousness. See Rob's newly released report on AI consciousness here. Rob just completed the philosophy fellowship at the Center for AI Safety, and before that he worked on these issues at the Future of Humanity Institute as head of the Digital Minds Research Group. He has a PhD in philosophy from NYU, supervised by David Chalmers.
Some personal (e.g. family or health) issue taking up Rob’s time.
Rob failing to prevent other work priorities from taking up his time.
Failing to properly scope the project goals.
None.
Anton Makiievskyi
about 1 year ago
Love to see this grant! Would be excited to contribute to an extension of this grant if first 5 weeks seem promising to Rob
Joel Becker
about 1 year ago
I have been trying to nudge Rob in this direction since earlier this year.
Earlier this year I was involved in a startling conversation. Rob Long had been speaking about the chances that we will see conscious AIs in the coming years. (And I had started to grok this possibility.) Now he was talking about research collaborations he might aim for in the future. Rob had empirical projects in mind that could only be done with access to frontier models. Should he bug a colleague-of-a-colleague about working at [top AI lab X]? Should he ask [collaborator Y at top AI lab Z] about the possibilities at his employer? Rob's conclusion was: not right now. He already had his plate full with other work, the request might be annoying, and, besides, a similar request to similar people had been declined-ish a couple of months ago.
This situation struck me as preposterous. Here is one of the world's top experts on AI consciousness, claiming a nerve-wracking chance of AI consciousness in the not-too-distant future, with fairly strong professional links to top AI labs and ~shovel-ready ideas for empirical projects, preparing a not-terribly-costly ask (give me a work desk, ~0 engineering time, and model access to work on these research questions)... and he is unsure whether he should make the ask?!
It seemed to me that the right question to ask was more like "should I try to start a research group as soon as possible?" (Of course there are many reasons why starting a research group might be a bad idea. But even if that were the case, Rob should at the very least be asking to work at places that would enable him to pursue his empirical projects.)
I want Rob to move the needle on empirical AI consciousness projects harder and faster. In the short term (see below), this means doing less 'public-facing work and thinking about his professional opportunities,' and more 'thinking through a theory of change for AI consciousness research, spending more time on empirical research with existing collaborators (e.g. Ethan Perez), and pushing for ways he can continue this research in the near future.'
First, in some sense I don't think Rob needs funding. But I'm not super concerned about this. People should be paid for important work and, besides, I'm partly trying to set up a good incentive environment for future grantees.
Second, I think that I can only counterfactually change a fraction of Rob’s professional activities. Currently, his work falls under the following buckets:
(1) Co-authoring a paper with Ethan Perez,
(2) Co-authoring a paper with Jeff Sebo,
(3) Responding to media and podcast requests about his recent paper, and doing other writing related to that paper, and
(4) Job-search stuff and applying for funding.
Bucket (1) is the sort of work that I want Rob to be doing more of: activities that directly move the needle on empirical work in his area of expertise.
I instinctively feel less excited about bucket (2), because this paper will not involve empirical AI consciousness research. But I don't want to impose on Rob's pre-existing commitment to this project. Also, the issues covered by the paper overlap somewhat with writing a strategy doc. (Though this overlap should not be overstated, as academic work is optimized for different things than a strategy document is.)
Bucket (3) I think Rob should be doing less of. The public-facing work mentioned above does not obviously move the needle on empirical work, and to the extent it does (e.g. indirectly via field-building or career capital), I would feel better if Rob undertook this work after having reflected more on his theory of change for AI consciousness research, rather than as a natural consequence of the release of his recent paper. And, unlike for bucket (2), giving up on some bucket (3) commitments feels low-downside: Rob is not going to be a less interesting podcast guest in a year's time!
Bucket (4) feels like a waste of time that I want Rob to avoid.
My understanding is that buckets (3) and (4) add up to a bit less than half of Rob’s time at the moment.
Third, empirical work on AI consciousness feels like a tricky area where I am extremely out of my depth. I am strongly relying on Rob being a 'reasonable expert who won't make dumb decisions that make me regret this grant.' That said, I feel very good about relying on Rob in this way.
Buckets (3) and (4) add up to 20 hrs/wk * 5 weeks = 100 hours. Rounding up to 120 hours (to allow for possible underestimation, and for professional expenses), at $60/hour, I will provide $7,200 in funding. I'm leaving the funding goal at up to $12,000, in case anyone wants to fund the remainder of Rob's time during the 5 weeks.
Please disclose e.g. any romantic, professional, financial, housemate, or familial relationships you have with the grant recipient(s).
I was housemates with Rob for a couple of months in early 2023, which is how I found out about this grant.