Mission Statement

Ensuring AI serves humanity, instead of replacing us

Not Our Successor is committed to building and enacting a policy platform ensuring that future frontier AI is made safe and controllable, aligned to human interests, and deployed in a way that serves the public, with the consent of those whose lives it will change.

In the long term, our goals are (in order of importance) for humanity to survive, to remain in the driver’s seat of our own destiny, to exist in a form recognizable to people today (if we so choose), and to not be rendered obsolete by total automation. We imagine a future where those who want to live (more or less) as people always have will retain that option; where meaningful work is still done by willing people; and where technology with capabilities broadly surpassing our own (such as AGI or ASI) is used sparingly, only where it is genuinely necessary (such as large-scale humanitarian efforts and preventing catastrophes) or where no person is willing to do the work, with other valuable efforts still carried out by people.

In the medium term, we want to ensure frontier artificial intelligence development is done responsibly, in a way that minimizes the risk of what we consider the four key disasters AI could cause: (1) badly misaligned and uncontrolled AI pursuing its own goals at our expense, (2) terrorists or other malicious actors using AI to kill billions, (3) a permanent global dictatorship enabled by AI surveillance and control mechanisms that were impossible before AI, or (4) the complete disempowerment of humanity. We want to ensure that the goals and values given to near-human-level AI and beyond are decided by the public, not by a handful of engineers with no public accountability.

To achieve this, we will pursue the following policy platform.

  • To ensure AI development is transparent, we will:
    • Establish robust protections for whistleblowers at frontier AI companies
    • Ensure that trusted monitors, insulated from the incentives of the companies, closely oversee the AI development and alignment process at frontier AI labs
  • With this transparency in place, we will ensure development is safe by establishing a safety standard at leading AI labs with multiple layers of protection, specifically:
    • Funding research into AI alignment, control, safety evaluations & alignment auditing, and interpretability
    • Ensuring that safe, democratic values are trained into frontier AIs
    • Ensuring that frontier AIs are thoroughly tested for alignment before they are deployed or used to train future AIs
    • Ensuring that control protocols are in place so that even if smarter-than-human AI were ever developed without being aligned, measures would exist to limit the damage
    • Ensuring that the true thoughts, plans, values, and reasoning of frontier AI can be discerned using chain of thought or interpretability methods
  • Lastly, to reduce the risk of harm from AIs developed in other parts of the world, we will:
    • Ensure that all companies producing frontier-level AI adhere to strict anti-espionage measures, preventing foreign adversaries from stealing model weights and other critical, dangerous AI technology
    • Use diplomatic and economic pressure to get leading world economies to agree to treaties and implement similar safety measures for any state or private AI efforts abroad
    • Set up a defense architecture capable of enforcing adherence to safe AI development, even against a state or private entity with access to superhuman-level intelligence

How will we achieve these policy goals? We will build public support by educating the public about AI risk, building a social media presence, working with media and documentarians, speaking at universities, and more. We will use this support to pressure politicians, while also speaking directly to those in Washington, conducting our own lobbying efforts, and, once we have a team capable of doing so, publishing a list of which politicians are and are not on board to apply additional public pressure. We will work with existing political, cultural, and social institutions sympathetic to our cause, drawing on their resources, connections, and audiences to spread awareness and enact these changes.

We are starting out small, and a significant challenge lies ahead of us, but we know this is possible.

In the short term, we are looking to establish 501(c)(3) or similar status and fundraise enough to hire at least one web developer, a social media manager, an event planner, an accountant, and a legal expert. We will set up speaking and outreach events, first in Atlanta, where we’re starting, and then expand nationally. We will find allies in positions of power who believe in our mission and are willing to help us grow. We will work with, and perhaps even establish, think tanks to help refine our policy platform and strategy.

If you’re interested in helping us grow, please join our mailing list or email contact@notoursuccessor.org.
