@Rachel l, thank you very much for the constructive feedback, and apologies for not replying sooner; I have been sick.

@xBruce did not lead with the most important aspect of this project: this is transparent AI research, and in many ways transparent AI is safe AI. Bruce has decided it's more effective to work contracts and wage labor to support this project. If that is the go-to mode of funding this kind of work, then I feel that finding better support for transparent AI research is of paramount importance. I've recently encountered another transparent AI researcher with similar struggles.

So going forward, I would like to explore some options to help break this gridlock. As people who code, we are strong at abstract thinking but aren't exactly known for our communication skills. Is there any support available for writing grants?

There also seems to be a lot of groupthink around the idea that neural networks are the way forward, to the exclusion of research in other areas like ours. This isn't just about my project, but about all the projects that are bootstrapped and struggling for support while neural networks enjoy an acceleration. I'm confident that having more tools in our AI toolbox will lead to safer AI, and I don't feel it's necessary for any one tool to do every job. Even if some of these techniques lead to dead ends, we will have learned something from the research.

Do you think there are ways we can explore to help support the advancement of transparent AI?