Manifund

Funding requirements

Sign grant agreement
Reach min funding
Get Manifund approval
SDCPNs for AI Safety

Science & technology · Technical AI safety · Global catastrophic risks
Jade Master

Proposal · Grant
Closes December 11th, 2025
$0 raised
$500 minimum funding
$39,009 funding goal


Project summary

We aim to develop the theory and practice of SDCPNs (Stochastic Dynamic Colored Petri Nets) so that they may serve as a suitable world model for verification of AI. SDCPNs are a flexible data structure as well as a scientific model, with applications to self-driving cars, power-grid management, supply chains, agent-based epidemiology, and more. SDCPNs are categorical and compositional, meaning that they can be trusted structurally while scaling to the size of real-world problems.
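To make the object concrete, here is a minimal illustrative sketch of a stochastic colored Petri net in Python. The names (`Transition`, `PetriNet`) and the representation (markings as nested dicts mapping places to colored token counts, one exponential rate per transition) are our assumptions for exposition only, not the project's actual formalism or code, which is categorical and considerably more general.

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    name: str
    consume: dict   # place -> (color, count) removed when the transition fires
    produce: dict   # place -> (color, count) added when the transition fires
    rate: float     # exponential firing rate for the stochastic semantics

@dataclass
class PetriNet:
    # marking: place -> {color: token count}; colors make this a *colored* net
    marking: dict = field(default_factory=dict)
    transitions: list = field(default_factory=list)

    def enabled(self, t: Transition) -> bool:
        # a transition is enabled when every place holds enough tokens
        # of the required color
        return all(self.marking.get(p, {}).get(c, 0) >= n
                   for p, (c, n) in t.consume.items())

    def fire(self, t: Transition) -> None:
        # firing removes the consumed tokens and deposits the produced ones
        for p, (c, n) in t.consume.items():
            self.marking[p][c] -= n
        for p, (c, n) in t.produce.items():
            self.marking.setdefault(p, {})
            self.marking[p][c] = self.marking[p].get(c, 0) + n
```

For instance, a toy power-grid net might hold `charged` tokens in a `grid` place, with a `discharge` transition that moves them one at a time to a `load` place.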

What are this project's goals? How will you achieve them?

We are creating a control theory and probabilistic model checking framework for SDCPNs which is analogous to the theory of Stochastic Differential Equations (SDEs). We are doing this in the language of category theory so that we can leverage compositionality to scale SDCPN analysis to real-world problems.

In particular, we are making proof-of-concept implementations alongside peer-reviewed papers which prove the correctness of these implementations. We will continue to work with the other groups in the SGAI programme to build the mathematics and software needed for a next-generation AI verification system.
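The stochastic, SDE-analogous semantics can be illustrated with a standard Gillespie-style continuous-time simulation, with Monte Carlo estimation standing in for full probabilistic model checking. The function names (`simulate`, `estimate_prob`) and the tuple encoding of transitions are hypothetical choices for this sketch, not the project's categorical framework.

```python
import random

def simulate(state, transitions, t_max, rng=None):
    """Gillespie-style continuous-time simulation of a stochastic net.

    `transitions` is a list of (name, rate, guard, effect) tuples:
    guard(state) -> bool says whether the transition is enabled, and
    effect(state) -> new state applies one firing. Illustrative only.
    """
    rng = rng or random.Random(0)
    t = 0.0
    while t < t_max:
        enabled = [(r, eff) for (_, r, g, eff) in transitions if g(state)]
        total = sum(r for r, _ in enabled)
        if total == 0:                 # deadlock: nothing is enabled
            break
        t += rng.expovariate(total)    # exponential waiting time
        if t >= t_max:
            break
        x = rng.uniform(0, total)      # choose a transition by rate
        for r, eff in enabled:
            x -= r
            if x <= 0:
                state = eff(state)
                break
    return state

def estimate_prob(init, transitions, prop, t_max, n=1000):
    """Monte Carlo estimate of P(prop holds at time t_max)."""
    hits = sum(prop(simulate(dict(init), transitions, t_max,
                             random.Random(i)))
               for i in range(n))
    return hits / n
```

Statistical estimates like this only sample trajectories; the point of the compositional, categorical treatment is to get guarantees that do not degrade as nets are glued together into larger systems.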

How will this funding be used?

As we are already funded by ARIA, this funding will be used as a cash float and a contract extension. Since the grant payments above are made in arrears, these funds will allow us to pay our bills on time as well as continue our work beyond the one-year contract.

Who is on your team? What's your track record on similar projects?

Jade Master: Project lead, with over 8 years of experience in academic theoretical computer science. Started Coherence Research to apply her work on compositionality and Petri nets to the problem of AI safety, and founded a spin-off company (GLAIVE Research) for phase 1 of the SGAI programme, which has received a 2-year grant from Renaissance Philanthropy to study machine-generated proofs.

Dylan Braithwaite: Currently managing the above spin-off. Finishing a Ph.D. in applied category theory.

Timotej Tomandl: Years of experience in Agda formalization of categorical concepts.

Sean Watters: Ph.D. in categorical model checking and type theory.

What are the most likely causes and outcomes if this project fails?

If the project fails, the SGAI programme lacks a suitable world-model engine, the technology of AI gatekeeping is never sufficiently developed, and AI systems remain unsafe.

How much money have you raised in the last 12 months, and from where?

£279,000 from the Advanced Research and Invention Agency (ARIA)
