Those who support the Singularity hypothesis often speak of apocalyptic scenarios, in a dystopian vein that also colors many visions of Transhumanism.
Geoffrey Hinton, often called the “godfather of AI,” warned that there is a 10-20% chance that AI could contribute to human extinction in the next 30 years. The argument rests on the observation that in nature there is almost no case in which a less intelligent being controls a more intelligent one.
The anarchist and terrorist Theodore Kaczynski argued that humans “will be reduced to the level of pets” once technological progress has gone far enough.
Hugo de Garis states that Artificial Intelligence will eventually dominate or destroy the human race, and even paints this scenario as desirable.
One of the more plausible risks of the Singularity is that the profound inequality between the few who possess superhuman technologies and the rest of the population could marginalize and impoverish the majority.
These dystopian visions are countered by far more optimistic ones.
The leaders of the companies developing AI, of course, envisage the opposite: much more optimistic scenarios. Sam Altman, CEO of OpenAI, pictures a future in which AI helps humanity make wiser decisions: AI, he says, does not impose choices but lays out perspectives and their consequences to aid reasoning and decision-making. The children of the future will grow up with AI that is smarter than they are, in a world where technologies actively understand and help them, in an era of abundance and very rapid change. Altman expresses the hope that our children will look back on our present with pity, as an era more limited than their own.
Yann LeCun, Meta’s chief AI scientist, believes that AI has the potential not only to prevent the extinction of humanity but to save it, by providing solutions to global challenges such as climate change, disease, and the management of the world’s resources.
Starting from the macro-categories of AI risk that emerge from leading analysts, it is possible to classify threats on an apocalyptic scale, that is, catastrophic threats capable of drastically altering the environment or the living conditions of humanity itself:
- Criminal use: when individuals or groups use AI intentionally to cause harm. Here a global disaster could be produced by the cyberwar actions of irresponsible regimes, or by AI-enhanced terrorist attacks that authorities are unable to control.
- Divergence: when AI goals diverge from human goals. An apocalyptic effect could result, for example, from a seemingly rational optimization of a system, such as maximizing the power supplied to data servers, consuming energy and water resources on a scale large enough to trigger a global conflict.
- Errors: malfunctions or unanticipated decisions due to technological limitations. A global-scale risk could be caused by AI hallucinations in a nuclear missile defense or attack control system, or by the creation of a bionic microorganism with lethal properties.
However, none of these hypotheses predicts the advent of a superintelligence that attacks humanity in order to extinguish it.
Criminal use stems from human will, while divergence and errors would stem not from intelligence but from stupidity, both human and artificial.
Thus, the apocalypse does not appear to be tied to the Singularity. The philosopher Luciano Floridi considers the risk of humanity’s extinction by artificial intelligence to be nil, since humans will always be able to pull the plug.
In reality, once AI is fully integrated into the IoT (Internet of Things) control systems of the world’s technological apparatus, managing communications, energy, armaments, hospitals, transportation, and water networks, turning it off will not be so straightforward.
Therefore, well before any Singularity comes into view, we should make sure that these intelligent systems are ethically sound and reliable.
In the management of nuclear weapons, in particular, we cannot afford mistakes, whether human or artificial, well before the Singularity.

