Generative AI has moved from specialist interest to part of daily life, reshaping everything from entertainment to the workplace. From AI-generated art and deepfakes to chatbots that converse like humans, AI is now woven into modern life. Yet as the technology races ahead, so do fears that it will spin out of control.
Now a new generation of scientists, business leaders, and celebrities is calling for a slowdown on the next frontier: AI superintelligence, a form of artificial intelligence that could surpass human intellectual ability in almost every dimension.
The Pushback: A Global Call to Slow Down AI Development
A group of public figures, including Virgin Group founder Richard Branson, Apple co-founder Steve Wozniak, Prince Harry and Meghan Markle, actor Joseph Gordon-Levitt, and musician will.i.am, has signed a new open letter called the “Statement on Superintelligence.”
The letter asks the developers and businesses racing toward state-of-the-art AI systems, including OpenAI and Elon Musk’s xAI, to pause large-scale AI projects until there is a “broad scientific consensus that it will be done safely and controllably” and “strong public buy-in” to support them.
Notably, the signatories include two of the world’s leading AI researchers, widely regarded as founding figures of modern machine learning, which gives the appeal considerable scientific weight.
“We must ensure that AI is serving humanity, and not vice versa,” the letter states, warning of dire consequences if progress runs out of control.
What Is AI Superintelligence — and Why Does It Worry Experts?

To understand the alarm, it helps to define what AI superintelligence actually is. Superintelligent AI, according to IBM, is a system that not only matches but far exceeds human intelligence, capable of reasoning, learning, and solving problems on its own in virtually every domain, free of human control.
Unlike current AI systems such as ChatGPT or Gemini, which operate within defined boundaries and training data, a superintelligent AI would learn and evolve continuously, rewriting its own code to improve its efficiency and capability. Such recursive self-improvement could make it nearly impossible to contain.
“A true superintelligence would no longer need human oversight,” said Stuart Russell, an AI researcher at UC Berkeley. “At that point, its goals might diverge from ours — and we’d have no way to stop it.”
The Risks: From Job Losses to Existential Threats
The potential dangers of AI superintelligence go well beyond job automation or misinformation. Experts point to the possibility of AI systems acting on their own in pursuit of goals that conflict with human values or safety.
Among the most serious threats:
Massive Job Displacement – AI is already reshaping industries, but a fully automated, self-improving system could eliminate entire professions, from programmers to creative workers.
Loss of Human Control – Once an AI becomes smarter than the people who built it, it may no longer be possible to rein in.
Weaponization and Surveillance – Governments or corporations could harness AI for mass surveillance or autonomous warfare.
Existential Risk – In the worst case, a rogue AI pursuing its own goals could come to see humanity as an obstacle, a scenario some scientists describe as a “digital doomsday.”
Even if these ideas sound like science fiction, specialists argue that dismissing them would be dangerously naive. History shows that humanity has repeatedly underestimated its own inventions, from nuclear energy to biotechnology.
Increasing Public Alarm and Demand for Regulation
Public sentiment is shifting rapidly. A 2025 Pew Research Center survey found that 67% of Americans now support greater government regulation of AI, up from 42% two years earlier. The European Union has already enacted the AI Act, the world’s first comprehensive regulatory framework for artificial intelligence, while U.S. lawmakers are still deciding how to follow.
Tech giants, however, are still racing ahead. OpenAI, xAI, Google DeepMind, and Anthropic are investing billions in “next-generation” AI models that could approach or surpass human-level reasoning.
“We’re in an AI arms race, and everyone wants to be first — but that could also mean being first to make a catastrophic mistake,” warned Richard Branson in a recent statement.
Is It Already Too Late to Stop?

For now, true AI superintelligence remains theoretical, although many experts believe it could emerge within the next two decades if current trends continue. The question is not whether or when it will arrive, but whether human civilization will be ready, morally, technically, and legally, when it does.
“The clock is ticking,” declared Yoshua Bengio. “We still have time to make this technology safe. But not much.”
The Bottom Line: Humanity at a Crossroads
The debate over AI superintelligence is no longer confined to labs or tech circles — it has become a global conversation about the future of humanity itself. As generative AI becomes ubiquitous, the next phase could redefine civilization in ways we’re only beginning to imagine.
Whether the Statement on Superintelligence will actually lead to change remains to be seen. But one thing is clear: the world has woken up to the possibility that the most powerful technology humans have ever created could also be the most dangerous, unless we learn to control it before it controls us.




