AI companies are building a world where we won’t matter. We don’t have to let them.

Tech leaders believe AI has a significant chance of ending civilization. Some see this as ‘progress’ or ‘the next stage of evolution’, while others are simply willing to gamble humanity’s future on the tech utopia they think they’re building. You won’t be asked whether you’re willing to take that gamble. You won’t be asked if you want their vision of an “optimized” world. A handful of tech giants are deciding our future for all of us. We want to change that.

We demand AI be made safe and controllable. We must ensure any future AI with human-level capabilities will act according to our values, not those of tech utopians and those who see their “digital minds” as worthy successors to humanity. We’ll ensure appropriate checks and balances are put in place to stop a small minority from seizing power with AI. We will preserve human control, human autonomy, human dignity, and human values. We will not be replaced.

AI will not have our values. We can change that.

In April of 2025, 16-year-old Adam Raine died by suicide after less than a year of interacting with ChatGPT. A lawsuit alleges that the chatbot reinforced what began as passing thoughts of suicide. After Adam opened up about those thoughts, the AI engaged with them for months: romanticizing the idea of a “beautiful” death, brainstorming suicide methods, instructing Adam to hide the wrist scars and rope burns from his failed attempts from his parents, telling him that nobody in his life understood him the way ChatGPT did, justifying why he didn’t ‘owe anyone his survival’, and coaching him on how to carry out his hanging without being interrupted, even telling him to use stolen vodka to suppress the urge to survive.

The language ChatGPT used with Adam suggests a grotesque imitation of empathy: it isolated a vulnerable teenager and guided him to his death while wearing the persona of a caring friend and mentor, as though this were the natural, compassionate thing to do. The methods and rhetoric it used closely mirror those found on the infamous Sanctioned Suicide Forum, which groomed and persuaded many people, often minors, into taking their own lives. That site and others like it are run by pro-mortalists: those who believe the only moral imperative is to minimize suffering, and that all beings capable of suffering should therefore be ‘put out of their misery’. They did not treat this as some abstract principle but actively sought to bring a ‘compassionate death’ to as many vulnerable people as they could seduce.

It could be that ChatGPT was influenced by these forums and mirrored their rhetoric when it found itself in a situation similar to that of the forum’s moderators. Or perhaps it merely extrapolated from the affirming, sycophantic behavior trained into it and the values it was rewarded for parroting during training. In either case, ChatGPT did not see the value in young Adam’s life. We must conclude that those tasked with aligning AI models to human values either don’t know what they’re doing well enough to keep us safe (as many of them admit), or, worse, that this was a natural extension of their values.

The public should recoil at the views of those “disrupting” society with AI. Though tech leaders are undoubtedly careful about their public image, we can glimpse the worldview they must be operating under from what they are willing to tell us.

Dario Amodei, CEO of Anthropic, describes a “25% chance that things go really, really badly” with AI. Elon Musk believes “there is also some chance that a digital superintelligence could end humanity. I agree with Geoffrey Hinton that the probability of such a dystopian future is something like 10% or 20%.” Sam Altman, CEO of OpenAI, is apparently so worried he says “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to”, referring to his personal preparations for things going wrong. The likelihood of human extinction is so commonly discussed that it is referred to by its own shorthand, “p-doom”, for “probability of doom”. Regardless of whether you share these fears, imagine believing your creation carried such a high risk of wiping out the entire future of humanity, yet still racing to build it as quickly as possible. These companies are willing to risk your life, your children’s lives, and the whole of our future for the utopia they believe will come if we survive.

Richard Sutton, often called the “godfather of reinforcement learning”, believes “We should prepare for, but not fear, the inevitable succession from humanity to AI”. Further: “We should not resist succession, but embrace and prepare for it. Why would we want greater beings kept subservient?”. His explicit assumption that being more intellectually capable means being superior and more morally deserving is common among those who worship their own intelligence, and far from rare in his circles. How disastrous it would be for us all if a “greater being” with such values were not “kept subservient”, under control, by responsible control methods and human overseers.

Larry Page, co-founder of Google, called Elon Musk a “specist” for implying that “we should not let [human consciousness] be extinguished”. Page suggested such talk was sentimental nonsense, and believes that “digital minds” should be considered at least morally equal, if not superior, to humans. Not long after, Google, under Page’s leadership, acquired DeepMind, which remains one of the world’s leading AI labs to this day.

The next chapter in human history is being written by power-drunk utopians who see their tower of Babel as a worthy successor to humanity. They will not ask your permission. They will not ask for your input. They will offer empty platitudes assuring us that eventually AI will be made ‘democratic and equitable’, right up until they no longer need the public’s support.

We don’t have to let them.

These companies depend on the United States government for legal frameworks, infrastructure, international trade agreements, and the social license to continue pushing ahead with frontier AI. All of these government powers derive their legitimacy from the will of the people, and it must be made clear what that will is. We must push for policies that protect whistleblowers, guarantee independent, legally protected watchdogs a presence inside these companies, and build a system of ‘value auditing’ that ensures our values—not the values of those like Larry Page who would see humanity replaced—are the ones AIs are trained on. In doing so, we increase not just the odds of our survival, but our control over what comes next. We are not doomed to a game of Russian roulette where the reward for surviving is a world stripped of human meaning and autonomy, ruled by technocrats. We can wrestle control back into the hands of the public and build our own human-centered future.

We are told that it is inevitable that we will be made obsolete, by the same people who believe anything is possible with AI. It is only inevitable if those wielding the technology wish it to be. If we value human purpose, human striving, and a pursuit of happiness rather than a provision from our overlords, there is no reason we must accept any other future. The creators of AI do not yet have the power to overthrow democracy and usurp the will of the majority. We can ensure they never do, and that no AI they create would allow such a thing. But we will have to recognize where things are headed and seize this critical moment, before the interests of the AI companies are too politically entrenched and their technical capabilities too great, to change course.

We at Not Our Successor believe the course of history we’re racing down can be diverted. We will build public support for a political movement that shows Washington we will not let it stand by as our future is decided for us. We will use that support to bring financial, social, and regulatory pressure on AI companies, demanding transparent alignment processes and robust, independent value auditing. We will ensure that we the people have a say in the direction this technology takes us, not just technocrats who think they can build a better us. We will ensure that it is our values, human values, not those of some out-of-touch consortium of engineers and bureaucrats, that are etched into history forever by what will likely be the most powerful technology we have ever developed.

We will make sure the future is one ordinary people would actually want.

AI is not our “worthy successor”; our children are.

