I think increasing the average layperson's understanding of AI safety is a crucial (and frankly underpulled) lever. As someone who does occasional AI reporting on YouTube, I've seen firsthand how publicly validating concepts like misalignment can build genuine concern about risks that much of the public still dismisses as "science fiction," such as AI takeover. I also think organic content of this sort is relatively high-ROI, since its marginal cost per impression is low compared to other outreach channels.