I think that if we make sufficiently smart humans, and they come of age before we have lost the future to AI, then x-risk will be much, much lower. I’m at something like an expected 50% reduction in x-risk, given that this works and timelines are long enough. Tsvi has longer timelines than I do. Maybe he’s right, or maybe AI will hit a wall, or maybe we will manufacture a slowdown.
As I currently see it, our best hope for the future is that we stop developing AI for a long while. I’m glad many people are working on that, but I think it’s also important to work on the complements to that: plans for “winning the peace”, as a friend of mine put it. If we make it to a substantial pause, my guess is it will still be urgent to get to more robust existential safety, and we’ll be glad to have started earlier.
So I’m donating $7k to Berkeley Genomics. I am considering giving more, but I also do value my own financial flexibility pretty highly for x-risk reduction.
I have some concerns with this plan that haven’t yet been thoroughly addressed by Berkeley Genomics[1].
For example, I think that most people interested in germline engineering want to only create people who are predicted to be at most a bit smarter than anyone who has ever lived. Though this seems like a wise and good deontological constraint to me, I’m worried about its consequences. It seems wise and good because our predictors and our linearity assumptions probably break down if we push on them too hard, and we risk doing something counterproductive or creating a child who has been unethically harmed. But I also worry that this makes the project less differentially helpful for alignment over capabilities than I’d like.
I think at some level of smarts, you spontaneously realise that AI loss-of-control is a problem (assuming I’m right about that). But I’m not sure at what level of pure smarts that happens. For example, I think von Neumann had higher g than the founders of the AI x-risk field, but I fear that von Neumann might have been a capabilities researcher. Slightly smarter than von Neumann isn’t obviously a level at which you spontaneously notice AI x-risk.[2]
Another concern: I understand that we don’t have very good predictors for personality traits, which, I’m told, are less well modelled by an additive assumption on the variants. I think it would be good to screen on malignity of personality: I suspect it might be unethical to exert a lot of specific control over a child’s personality for instrumental reasons, but it seems worth at least checking that the child isn’t unusually cruel or domineering or something.
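(For concreteness, here is a rough sketch of what I understand the additive assumption to mean, not a claim about Berkeley Genomics’ actual models: a polygenic predictor typically scores a genome as a weighted sum over variants,

$$\hat{y} \approx \sum_i \beta_i x_i,$$

where $x_i \in \{0, 1, 2\}$ counts copies of the effect allele at variant $i$ and $\beta_i$ is its estimated effect size. My understanding is that personality traits are captured less well by sums of this form than cognitive ability is.)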
I personally have some ethical uncertainty about genomic engineering. I’m unsure about the appropriate deontology for relating to choosing traits of the unborn, and some people I take seriously are worried about risks of dystopias[3]. So it seems great that Tsvi is writing a lot about the ethics of this approach. This team seems unusually likely to stop their work if they uncover good enough arguments that it’s a bad thing to do (whether because it’s immoral or because it won’t work). AFAICT, their initial advocacy approach is to carefully and publicly build sane ethical and technological frameworks. I expect that Berkeley Genomics’ writing will be the most helpful stuff I read in the next year for thinking about the ethics of germline engineering.
I’ve spent a decent amount of time reading and thinking about this. It seems like a good sign that a lot of the writing that best addressed my concerns was written by Tsvi over the last few years. For example, when I was trying to think about how plans that take a long time interface with timelines, I found this post by Tsvi helpful. There are definitely a bunch more things I’d like to add into the model in that blogpost to reason about this, but I like that he put in the legwork to do the first quantitative modelling pass!
I’m currently deferring fairly hard on the technical picture. I’ve spent some time trying to understand the problems and approaches as Tsvi sees them, but not that much time trying to red-team him or question his assumptions. I hope to spend more time thinking about the technical side in the future.
I feel pretty excited about broadly supporting Tsvi, and also about the specific focuses of Berkeley Genomics. I hope they succeed at reaching their goals!
[1]: Though Tsvi’s been super up for engaging me on concerns I have! I just have to find the time and availability.
[2]: I guess they wouldn’t have to really “spontaneously” notice this problem, but rather come to the correct conclusion (whatever that may be) given the arguments already present in the world.
[3]: Though I’m not, on my inside view, worried about that yet.