Furkan Elmas
A concrete safety experiment to detect when an LLM's local reasoning stops behaving like a single stable executive stream, using scalar hazard signals.
Mirco Giacobbe
Developing the software infrastructure to make AI systems safe, with formal guarantees
Paula Coelho
A free platform enabling researchers in emerging economies to collaborate, innovate, and build deep-tech and Responsible AI projects
A simulation engine for modeling system deviation, collapse trajectories, and stability dynamics in advanced AI systems.
Justin Bianchini
A modular gene-editing platform for engineering new pigment patterns in ornamental plants, starting with a vein-pattern rescue line in petunias.
Nicole Mutung'a
Funding research on how AI hype cycles can drive unsafe AI development
Vatsal
Douglas Rawson
Mitigating Agentic Misalignment via "Soul Schema" Injection. We replicated a 96% ethical reversal in jailbroken "psychopath" models (N=50).
Early-stage work on a small internal-control layer that tracks instability in LLM reasoning and switches between SAFE / WARN / BREAK modes.
Theia Vogel
Research, tutorial writing, and open-source libraries & tools for experimenting with language models
Sean Sheppard
The Partnership Covenant: hardware-enforced containment for superintelligence, because software stop buttons are theater
Jade Master
Developing correct-by-construction world models for verification of frontier AI
Markus Englund
Continue developing 'copy-paste-detective', software that detects signs of data fabrication in scientific research, and run it against 20,000 publicly available Excel datasets.
David Rozado
An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems
GG
Developing and commercializing an alternative to feeder rodents—tackling one of the most neglected and high-suffering forms of factory farming.
Bryan Davis
A set of AI-powered open-source tools intended to shorten time-to-market for medtech innovators.
Thomas Briggs
This nationwide report will provide a detailed account of the policies states have in place to help advanced learners thrive.
Building an operator-based simulation environment to analyze stability, divergence, threshold failures, and collapse modes in advanced AI-related systems.