The MoSSAIC framework is good foundational work that pushes mechanistic interpretability in a more scientifically grounded and principled direction. I recommend funding this work. The paper is a meta-level intervention: it identifies a core assumption underlying most contemporary safety work (the "causal-mechanistic paradigm") and systematically demonstrates why this paradigm will likely fail as AI systems become more capable. It does this not through abstract speculation alone, but by connecting concrete empirical results (Bailey et al. 2024 on obfuscated activations; McGrath et al. 2023 on self-repair) to a sequence of plausible threat models. I have high respect for Matt Farr for taking the initiative to work on something intrinsically valuable to the Alignment field yet misaligned with the incentives of its main players. This is diagnostic and constructive work that questions the dominant paradigm, questioning the field desperately needs yet rarely discusses. The field systematically underfunds this kind of work on the assumption that it does not produce legible outputs, but the MoSSAIC framework is itself a counterexample: a legible output. I encourage people with more resources to retroactively fund this work.