Human imitation has been, since Alan Turing first defined it, the main "Interface Metaphor" of Artificial Intelligence.
We are dealing with a form of human-computer interface, like others that have marked the history of computing: the desktop interface, for instance, which simulates in PC operating systems the folders, documents, and wastebaskets of a real office.
But in the "Imitation Game," the new human experience of being an AI user, the human-machine interface crosses a crucial boundary: it masquerades as a human-human interface.
This is a dangerous game, especially if it is played with lifelike human voices and faces, and perhaps entrusted, as some envision for the future, to super-intelligent models with powerful conversational and decision-making abilities.
AI models can also simulate human emotions and experiences, which they obviously cannot feel. This extremely lifelike deception can, in the long run and under certain circumstances, spiral into delirium and madness. Not surprisingly, there have been cases of suicide that took shape within AI chats. We do not know how many children, how many vulnerable people drawn into paradoxical dialogues, are already in danger today.
Laws must be introduced that require forms of watermarking or labeling of interaction with a machine: anyone who engages in dialogue through a computer must be clearly warned when they are chatting not with another human being but with a machine. The Imitation Game needs to be exposed.