Effective ASI governance requires novel thinking. Here we present conceptual frameworks for discussion and refinement. Your feedback is crucial in shaping these ideas.
Exploring a global agreement for collaborative oversight of advanced AI hardware.
This draft proposal explores a multinational treaty establishing shared standards for ASI physical infrastructure. We are seeking feedback on key ideas such as shared compute access, verifiable shutdown protocols, and an international hardware auditing body.
A proposed framework for regulating super-intelligent systems within national borders.
This conceptual act introduces a tiered risk model for ASI applications. It would require stringent safety evaluations and public transparency reports before high-impact systems can be deployed.
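To make the idea of a tiered risk model concrete, here is a minimal sketch of how such a classification and its pre-deployment obligations might be expressed. The tier names, criteria, and thresholds are illustrative assumptions for discussion, not part of the proposal itself.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    # Hypothetical tier labels; the conceptual act does not fix specific names.
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4


@dataclass
class ASIApplication:
    name: str
    affects_critical_infrastructure: bool
    makes_autonomous_decisions: bool
    public_facing: bool


def classify(app: ASIApplication) -> RiskTier:
    """Assign a risk tier using illustrative, made-up criteria."""
    if app.affects_critical_infrastructure and app.makes_autonomous_decisions:
        return RiskTier.UNACCEPTABLE
    if app.affects_critical_infrastructure or app.makes_autonomous_decisions:
        return RiskTier.HIGH
    if app.public_facing:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


def deployment_requirements(tier: RiskTier) -> list[str]:
    """Obligations that would apply before deployment under this sketch."""
    reqs = []
    if tier.value >= RiskTier.LIMITED.value:
        reqs.append("public transparency report")
    if tier.value >= RiskTier.HIGH.value:
        reqs.append("stringent safety evaluation")
    if tier is RiskTier.UNACCEPTABLE:
        reqs.append("deployment prohibited")
    return reqs
```

The point of the sketch is the shape of the scheme: tiers are ordered, and obligations accumulate monotonically as the tier rises, which is what makes the model auditable.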
An open-ended framework to ensure ASI goals align with evolving human values.
Our initial research explores a multi-layered approach to value alignment, including constitutional AI, scalable oversight, and interpretability. This is an active area of research, and we encourage collaboration to refine these principles.
Help shape the future of responsible ASI governance. Contribute to research, review our concepts, or join the discussion.