One of the most significant and immediately noticeable problems of generative artificial intelligence is that it makes errors systematically. Hallucinations are a serious limitation, obvious to anyone who generates text or images with AI. But is this problem solvable? Most people assume it is an early flaw that technological progress will soon eliminate. Experts see it differently: newer-generation models are in fact producing more hallucinations, not fewer. It should also be noted that the data available on the Internet for training AI models has largely been exhausted, so we can no longer expect quality to improve through the injection of significant new volumes of information. As Amr Awadallah, CEO of Vectara and a former Google employee, puts it: “Despite our best efforts, these systems will always hallucinate.” Hallucinations, then, are probably not a temporary imperfection but the expression of an ontological limit of generative AI, rooted in its statistical nature and in the way its models are trained.
This is a serious limitation: the output of generative AI has to be checked by people, with costs and risks that in the worst cases make it uneconomical. One strategy that effectively reduces hallucinations is to move away from the vast, general-purpose web data on which chatbots such as ChatGPT or DeepSeek rely and instead ground the model in limited, high-quality datasets. The success of this approach casts doubt on the progress of general-purpose AI, which risks becoming increasingly prone to hallucinations, and supports the development of vertical, circumscribed and specialized AI.
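To make the grounding strategy concrete, the sketch below shows one simple way a system can restrict a generative model to a small, curated corpus: retrieve the most relevant trusted passages for a question and instruct the model to answer only from them. This is a minimal illustration, not the method used by any specific vendor; the corpus contents, the similarity measure, and the prompt wording are all assumptions made for the example.

```python
# Minimal sketch of grounding generation on a small, curated corpus instead of
# open web data. Retrieval here is plain bag-of-words cosine similarity; a real
# system would use a proper retriever and then pass the prompt to a model.

from collections import Counter
import math

# Illustrative high-quality, domain-specific corpus (assumed content).
CORPUS = [
    "Hallucinations are outputs that are fluent but not supported by any source.",
    "Grounding a model on curated data narrows its scope but improves factual reliability.",
    "Vertical, specialized systems restrict answers to trusted domain documents.",
]

def _term_counts(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = _term_counts(query)
    ranked = sorted(corpus, key=lambda doc: cosine_similarity(q, _term_counts(doc)), reverse=True)
    return ranked[:k]

def grounded_prompt(query: str, corpus: list[str]) -> str:
    """Build a prompt that constrains the model to the retrieved context only."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return (
        "Answer using ONLY the context below. If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    print(grounded_prompt("Why does grounding reduce hallucinations?", CORPUS))
```

The design choice this illustrates is the trade-off described above: the model gives up breadth (it can refuse questions outside its corpus) in exchange for answers that can be traced back to vetted sources and checked at lower cost.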