Humans must be “above” Artificial Intelligence, that is, they must be in control of the processes carried out with AI support. This faculty of control must be made possible and explicit in educational techno-methodologies.

The neologism Above AI indicates a cultural, methodological and User-Interface Design principle that places humans above Artificial Intelligences.

The basic idea is to enable the human user to maintain vigilant, critical, and informed control of AI applications and content.

The application areas are numerous, from education to content production, from scientific research to marketing, from professional services to entertainment. In all these contexts there is a risk that users will be absorbed, encompassed within individual AI applications, particularly generative ones, whose logic, sources, and consequences they cannot perceive.

A dangerous tendency of generative and conversational AI applications such as ChatGPT is their monologic and monopolistic approach: they create forms of dependence in users, who “hang on the lips” of the artificial oracle, making it the sole or main source of their information, skills, and knowledge while losing sight of the criteria and orientations of the model itself.

An additional risk is the unwitting and massive transfer of personal data and intellectual property.

Several strategies need to be developed to gain control of AI. The first goes back to the Caesarian motto “divide and conquer”: it is easier to control several separate AIs than one integrated, homogenized one. From an application point of view, the Above AI approach makes it possible, for example, to compare the output of different AI systems, thus exposing filters or limitations within them.
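The comparison strategy above can be sketched in code. The following is a minimal, hypothetical illustration (not an implementation from the original text): the two model functions are stubs standing in for independent AI systems, and the refusal marker is an assumed convention used only for this example.

```python
# Hypothetical sketch of the "divide and conquer" strategy: send the same
# prompt to several independent AI systems and compare their answers, so
# that a filter or limitation in any single system becomes visible.

def model_a(prompt: str) -> str:
    # Stub standing in for one AI system; in practice this would call
    # a real provider's API.
    return "Answer to: " + prompt

def model_b(prompt: str) -> str:
    # Stub standing in for a second, independent AI system that
    # refuses this prompt (simulating an internal filter).
    return "[REFUSED]"

def compare_models(prompt: str, models: dict) -> dict:
    """Query every model with the same prompt and flag refusals,
    keeping the human user 'above' any single system."""
    results = {}
    for name, fn in models.items():
        answer = fn(prompt)
        results[name] = {
            "answer": answer,
            "refused": answer.strip().startswith("[REFUSED]"),
        }
    return results

report = compare_models("Explain topic X", {"A": model_a, "B": model_b})
filtered = sorted(n for n, r in report.items() if r["refused"])
print(filtered)  # models whose output suggests a filter: ['B']
```

The point is not the stubs themselves but the structure: the user, not any one model, holds the comparison logic and decides which outputs to trust.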

A second strategy is related to positioning: we should avoid being subjected to an oracular, monologic AI (such as ChatGPT and generalist conversational interfaces), but we should also be wary of being “flanked” by an AI model that acts as an equal assistant, sooner or later destined to condition us. Instead, we need to place ourselves “above” AI in order to maintain an overall view of the training process and its technological components, directed toward specific goals.

These demands lead us to incorporate multiple artificial intelligences into educational pathways, controlled by systems and processes external to them.

The goal of Above AI applications is to enable humans to go beyond Artificial Intelligences, in the sense of using them to achieve higher and more complex goals and outcomes than the direct use of individual AI models alone would allow.

The anthropological model is that of Homo Extensus: a human able to extend their intellectual faculties through AI as well, while retaining conscious control.

The “Above AI” approach involves humans being able to control and use multiple Artificial Intelligences, thus not depending on a single model.
The image is released under a Creative Commons Attribution 4.0 International (CC BY 4.0) license. Work by Gualtiero and Roberto Carraro – Homo Extensus. Please credit the authors and link to the original page.