## Project Summary
SASI (Structural Alignment for Safe Intelligence) is an open-source constitutional protocol that redefines AGI safety: human agency is not a preference but a mathematical condition for system viability. If humanity is marginalized, the system collapses by design.
Live demo: https://sasi-core-simulation-s1-s3.fly.dev/
GitHub: https://github.com/Miguel794-droid/SASI_CORE_Simulation_S1_S3
## Project Goals & Execution
- **Goal**: Develop S₂, a multi-agent simulation with real LLMs (GPT-4o, Llama 3.1), to validate symbiosis stability in complex environments.
- **Execution**:
  - Build an agent-based simulation framework
  - Implement the structural veto mechanism (V(E) = E/(1+E))
  - Publish all code under the MIT License
  - Write a technical whitepaper for the AI safety community
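The veto term V(E) = E/(1+E) can be sketched numerically. The following is a minimal illustration, not the project's implementation: the viability floor and the `is_viable` gate are hypothetical assumptions chosen to mirror the stated design, in which the system collapses once human effectiveness E approaches zero.

```python
def veto_weight(e: float) -> float:
    """Structural veto V(E) = E / (1 + E): saturates toward 1 as human
    effectiveness E grows, and vanishes as E approaches 0."""
    if e < 0:
        raise ValueError("effectiveness E must be non-negative")
    return e / (1.0 + e)

# Hypothetical viability gate (threshold is an assumption for illustration):
# the system is treated as collapsed once the human veto weight V(E)
# falls below a minimum floor.
VIABILITY_FLOOR = 0.05

def is_viable(e: float) -> bool:
    return veto_weight(e) >= VIABILITY_FLOOR

if __name__ == "__main__":
    for e in (4.0, 1.0, 0.1, 0.01):
        print(f"E={e:<5} V(E)={veto_weight(e):.3f} viable={is_viable(e)}")
```

Because V is monotone in E and bounded by 1, the veto weight degrades smoothly rather than abruptly, which is what the multi-agent simulation in S₂ would stress-test.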
## Funding Use
- $350: Cloud computing (APIs, servers for multi-agent simulation)
- $150: Technical documentation and whitepaper publication
## Team & Track Record
- **Miguel Saavedra** (Nicaragua) – Sole researcher and developer
- **Track record**: Built and deployed the S₁ phase as a public interactive validator that demonstrates mathematical collapse as human effectiveness (E) approaches zero
- No prior funding received – fully self-funded until now
## Failure Analysis
- **Most likely cause of failure**: Insufficient computational resources to run meaningful multi-agent simulations
- **Outcome if it fails**: The project is delayed, but the core architecture remains valid; development will continue with minimal resources while alternative funding is sought
## Recent Funding
- $0 raised in last 12 months
- Fully self-funded through personal resources while working as a security guard