Artificial Intelligence has already inspired a vast literature, including science fiction, on its attendant risks.
Many authors offer exaggerated, apocalyptic visions, speculating that AI will go so far as to extinguish the human species. Others downplay the risks, arguing that we need only flip a switch to disable it.
A more balanced analysis classifies AI risks into the following categories:
- Misuse – cases where individuals or groups deliberately use AI to cause harm: for example, criminals employing it for cybercrime, armies and powers waging acts of cyberwar, or actors using AI to interfere in the democratic life of other countries.
- Divergence – cases where the goals of AI diverge from human goals. An example is social media algorithms that, in pursuing a company's profit goal such as increasing views, harm social cohesion, mental health, or the intellectual faculties of the population, for instance by amplifying divisive and extremist messages.
- Errors – malfunctions or unanticipated decisions arising from technological limitations: for example, accidents caused by hallucinations in autonomous driving, military errors resulting in friendly fire on one's own troops, or massacres in out-of-control automated warfare.
In this chapter, however, we focus on the risks AI poses to human intelligence. Some effects of AI can harm human intellectual faculties, particularly in the educational context.
Chief among these effects is mental atrophy: the large-scale reduction in IQ caused by delegating intellectual functions to AI.
But the flood, on the web and social media, of realistic yet false multimodal content (so-called "deepfakes"), together with algorithms that distort public opinion, foster forms of digital media addiction, or even induce emotional and psychological imbalances, poses further risks that can have devastating effects on society.

