OpenAI is developing AI that could surpass human abilities. Is this development genuinely something to celebrate?
What if artificial intelligence ultimately surpasses human beings? This scenario, once confined to science fiction just five years ago, now seems increasingly likely as generative AI technology advances. Sam Altman, CEO of OpenAI, has even made achieving this “superintelligence” his ultimate goal. But is this prospect grounded in reality or fantasy? And is it genuinely desirable?
Sam Altman’s Fixation with Superintelligence
“Superintelligence will be the most transformative technology ever developed by mankind and could help us solve many of the world’s most significant problems. However, its immense power could also prove to be very dangerous, possibly weakening humanity or even leading to our extinction.”
These words, more concerning than exciting, were published by OpenAI in its newsroom on July 5, 2023. At the time, the startup was basking in the explosive success of ChatGPT, which had recently been upgraded to the GPT-4 model. Since then, the tool has continued to evolve: image analysis, advanced voice mode, image generation, web search… These innovations are bringing OpenAI closer to its most dizzying ambition.
And Sam Altman expects it soon. At the end of 2024, he projected the arrival of a superintelligent AI “within a few thousand days.” By last June, he considered the process to have already begun.
“Humanity is on the verge of creating digital superintelligence, and so far, it’s a lot less bizarre than we might have imagined. Robots are not yet wandering the streets, and most of us don’t spend our days talking to an AI. […] And yet, we have recently designed systems that surpass human intelligence in many ways,” wrote Sam Altman on his blog in June 2025.
Sam Altman is not the only one enthused about superintelligence. In a similarly messianic tone, Mark Zuckerberg, who is still struggling to build a useful AI, envisions a “personal artificial intelligence” that will “help everyone achieve their goals, create what they want to see in the world, experience all kinds of adventures, be a better friend to loved ones, and become the person they aspire to be.”
What is Superintelligence?
The term “superintelligence” refers to a type of artificial intelligence capable of surpassing human cognitive abilities in nearly all areas: creativity, scientific reasoning, decision-making, and social understanding. It is a theoretical concept now championed by OpenAI, but it originates from the book Superintelligence: Paths, Dangers, Strategies, published in 2014 by Swedish philosopher Nick Bostrom. The author describes a system capable of improving its own capabilities until it escapes the control of its creators.
To understand this concept, it’s essential to distinguish between two levels of artificial intelligence:
- Weak AI: This refers to current systems specialized in specific tasks such as automatic translation, image generation, voice recognition, or autonomous driving. Their scope is limited and depends closely on the human-generated data they were trained on. ChatGPT, despite its performance, falls into this category: it simulates certain forms of human intelligence through statistical language analysis, but it does not truly understand what it produces and does not act autonomously.
- General AI: This refers to a machine capable of reasoning, learning, and adapting to any situation, much like a human being. Unlike weak AI, it is not limited to a particular domain and can transfer its knowledge from one context to another. Superintelligence, in this logic, would be its ultimate evolution: an AI capable of improving itself exponentially, without human intervention.
Such an entity would surpass humans not only in processing speed but also in the quality of its reasoning, its creativity, and its understanding of the world. However, this raises significant challenges in terms of control, ethics, and safety.
Is Superintelligence Really Possible?
While Nick Bostrom theorized superintelligence and warned of its dangers, OpenAI appears to be the first organization to pursue its realization with such fervor. But is this horizon truly achievable?
In theory, yes, many experts respond. Yann LeCun, head of AI at Meta and a deep learning pioneer, stated last February that there was “no doubt” that superintelligence would arrive “sooner or later.” On timing, however, the French-American researcher is far more cautious than Sam Altman. In his view, current generative AI models are still a long way from true superintelligence.
“The current techniques used in chatbots and similar tools are not sufficient… Understanding the physical world is much more complex than understanding and generating language,” explains Yann LeCun.
That’s where the main limitation of current models lies: they manipulate language without truly understanding its meaning. Their intelligence remains purely statistical, based on word correlations rather than a true understanding of the world. Thus, there is still a significant gap to bridge.
Demis Hassabis, director of Google DeepMind, believes that a general artificial intelligence could emerge around 2030. A timeline further out than what Sam Altman dreams of but still remarkably close.
Should AI Development Be Slowed Down?
Superintelligence no longer appears as a distant hypothesis. However, the emergence of such innovation without safeguards comes with considerable risks: lack of human control, manipulation of information, immense power concentrated in a few hands…
On October 22, 2025, over 800 researchers, entrepreneurs, and notable figures signed a call launched by the Future of Life Institute requesting a halt to work on superintelligence. Among them, Geoffrey Hinton, Stuart Russell, and Steve Wozniak warn about the lack of scientific guarantees regarding the safety of such systems. Their text calls for a moratorium to assess the real risks of a technology that could disrupt the world’s balance.
“We call for a halt to the development of superintelligence until there is a scientific consensus that it can be built in a controlled and secure manner, and until there is support from the public,” reads the initiative’s page.
Major AI labs declare support for regulation but advocate a different approach. Sam Altman and Demis Hassabis argue for the establishment of global governance, believing that slowing down research would leave room for less scrupulous actors. The Superalignment program, launched by OpenAI in 2023, fits into this logic: it aims to verify and test the reliability of future systems to ensure their safety.
However, several experts, like Timnit Gebru and Yoshua Bengio, see this primarily as a way for these companies to maintain control over a framework they themselves have helped make indispensable. Over the past five years, the development of the most powerful AI models has proceeded without significant public oversight. The industry giants have advanced at a breakneck pace, imposing their technologies before any regulations were established, following a strategy of “fait accompli.” In such conditions, one can legitimately question the ability of AI companies to design a superintelligence that is truly beneficial for everyone.

Jordan Park writes in-depth reviews and editorial opinion pieces for Touch Reviews. With a background in UI/UX design, Jordan offers a unique perspective on device usability and user experience across smartphones, tablets, and mobile software.