There is an insidious and structural disruption of the reading process: hypertextuality itself, that is, the fact that content is not presented in a sequential, unified, in-depth form, but almost always appears as small pages or modules of highly concise content connected to one another by links.
Hypertext predates the Internet (the first Hypertext User Group was founded in Milan, Italy, in the 1980s, and the authors took part in it), but from the very beginning it became the language behind the Web: HTML, an acronym for HyperText Markup Language.
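To make the idea concrete, here is a minimal sketch of a hypertext page in HTML; the title, file name, and anchor shown are invented for illustration, though the W3C URL is real:

```html
<!-- A minimal hypertext document: a concise module connected to others by links -->
<!DOCTYPE html>
<html>
  <head>
    <title>Hypertext example</title>
  </head>
  <body>
    <p>
      A concise module of content, with a
      <a href="glossary.html#hypertext">link to a definition</a>
      and a
      <a href="https://www.w3.org/">link to another site</a>.
    </p>
  </body>
</html>
```

The `<a href="…">` element is the hyperlink itself: following it interrupts sequential reading and jumps to another module, which is precisely the structural trait discussed here.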
HTML is a public domain language whose syntax is established by the World Wide Web Consortium (W3C).

A number of studies have examined the impact of hypertext on the ability to concentrate and to read in depth. In general, reading on the Web rarely reaches the depth that a paper book allows. The hypertext par excellence, the World Wide Web, nevertheless offers immediate access to a virtually unlimited wealth of information: it encourages comparison between sources, rapid movement from one topic to another, instant explanations of terms, and free-form paths of deeper exploration that would be cumbersome on paper. The contest between the paper book and the hypertext Web therefore remains open. It is unreasonable to expect that humanity, faced with the cognitive limits of hypertext and the World Wide Web, will return to the paper book; it is more logical to focus on those limits and try to overcome them by designing forms of hypertext that can foster deep thought. The advent of generative artificial intelligence, which feeds on the Web, shifts the comparison with the book into new contexts, enabling conversational and operational experiences that neither the book nor the Web made possible. This is one of the goals of the “Homo Extensus” project.

