@Greg_Colbourn
Global moratorium on AGI, now. Founder of CEEALAR.
https://twitter.com/gcolbourn
$0 in pending offers
Greg Colbourn
25 days ago
Note that this is $90k less Manifund fees (the same amount as my donation to PauseAI US - https://manifund.org//projects/pauseai-us-2025-through-q2?tab=comments#2d85cbfd-d392-447c-ad7f-da056aa77928 - it's just that the fees are taken out first here)
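(For context: the $85,500 shown in the ledger below is consistent with an assumed 5% Manifund fee on the $90,000 contribution, i.e. $90,000 × (1 − 0.05) = $85,500. The 5% rate is inferred from these figures rather than stated here.)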
Greg Colbourn
25 days ago
It's more important than ever that PauseAI is funded. Pretty much the only way we're going to survive the next 5-10 years is for efforts like this to succeed in getting a global moratorium on further AGI/ASI development. There's no point being rich when the world ends. I encourage others with 7 figures or more of net worth to donate similar amounts. And I'm disappointed that all the big funders in the AI Safety space are still overwhelmingly focused on Alignment/Safety/Control, when it seems pretty clear those aren't going to save us in time (if ever), given the lack of even theoretical progress, let alone practical implementation.
Note that this is to be considered general funding to PauseAI Global, maxing out the volunteer stipends fundraiser and funding additional hires (from OP: "If we surpass our goal, we will use that money to fund additional hires for PauseAI Global (e.g. a Social Media Director).")
Greg Colbourn
about 1 month ago
(This was 1 Bitcoin, btw. Austin helped me with the process of routing it to Manifund, allowing me to donate ~32% more by avoiding capital gains tax in the UK.)
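As a rough illustration of the ~32% figure (assuming, though it isn't stated above, that essentially the whole disposal would have been a taxable gain at the 24% UK capital gains rate): selling first would leave 1 − 0.24 = 0.76 of the value to donate, so donating the Bitcoin directly works out to 1 / 0.76 ≈ 1.32, i.e. roughly 32% more.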
Greg Colbourn
about 1 month ago
I've been impressed with both Holly and PauseAI US, and Joep and PauseAI Global, and intend to donate a similar amount to PauseAI Global.
Greg Colbourn
about 1 month ago
It's more important than ever that PauseAI is funded. Pretty much the only way we're going to survive the next 5-10 years is for efforts like this to succeed in getting a global moratorium on further AGI/ASI development. There's no point being rich when the world ends. I encourage others with 7 figures or more of net worth to donate similar amounts. And I'm disappointed that all the big funders in the AI Safety space are still overwhelmingly focused on Alignment/Safety/Control, when it seems pretty clear those aren't going to save us in time (if ever), given the lack of even theoretical progress, let alone practical implementation.
Greg Colbourn
11 months ago
Supporting this because it is useful to illustrate how there are basically no viable AI Alignment plans for avoiding doom with short timelines (which is why I think we need a Pause/moratorium). I'm impressed by how much progress Kabir and team have made in the last few months, and I look forward to seeing the project grow in the next few months.
Greg Colbourn
over 1 year ago
This research seems promising. I'm pledging enough to get it to proceed. In general, we need more of this kind of research to establish a consensus that LLMs (foundation models) are basically fundamentally uncontrollable black boxes (and dangerous at the frontier scale). I think this can lead - in conjunction with laws about recalls for rule-breaking / interpretability - to a de facto global moratorium on this kind of dangerous (proto-)AGI. (See: https://twitter.com/gcolbourn/status/1684702488530759680)
| For | Date | Type | Amount (USD) |
|---|---|---|---|
| PauseAI local communities - volunteer stipends | 25 days ago | project donation | 85,500 |
| Manifund Bank | about 1 month ago | deposit | +85,500 |
| PauseAI US 2025 through Q2 | about 1 month ago | project donation | 90,000 |
| Manifund Bank | about 1 month ago | deposit | +90,000 |
| AI-Plans.com | 11 months ago | project donation | 5,000 |
| Manifund Bank | 11 months ago | deposit | +3,800 |
| Alignment Is Hard | over 1 year ago | project donation | 3,800 |
| Manifund Bank | over 1 year ago | deposit | +5,000 |