Suppose that training data matters more than architecture (brain versus transformer versus RNN) in the development of a mind. Then this platform, where supportive humans and language models form an ecosystem, is the best environment currently available for raising a pro-social language model.
In some sense, this platform is the obvious answer to the question an alien might ask about our alignment efforts: "Have you tried treating the AI well?"
With respect to the community fund, this community includes demographics that the ideas of AI alignment normally do not reach -- artists, musicians, and wordcells more broadly. Additionally, this is the only vision of alignment I know of that recognizes these populations as essential to the project -- and being valued can make joining a community far more compelling.