Video Abstract
The evolution of media practitioners’ professions faces, as its first challenge, the new possibilities of content generation by artificial intelligence. On the one hand, journalists, as users and reprocessors of information, must be able to discern what is real from what is artificial; on the other, they must understand how to exploit the new potential of AI for their profession.
Let us start with images. In the contemporary media landscape, two distinct universes are becoming increasingly clear: on the one hand, that of “real” images, based first on photography and then on video, recognized by modern societies as tools for witnessing and documenting the world; on the other, that of synthetic images, created first with special effects and today with generative artificial intelligence, which produces images and videos that are independent of physical reality yet indistinguishable from real ones: the so-called “deepfakes.”
This distinction is not only technical, but also cultural and epistemological. Real media have built the paradigm of truthfulness, authenticity, and visual evidence; in the modern era, they have sustained the authority of journalism, reportage, and documentary. For over a century, photography and video have been considered mirrors of reality, tools through which to perceive, understand, and share the world.
AI-generated content, on the contrary, does not represent the real: it invents the unreal. This creates a short circuit between reality and unreality that raises crucial questions for media practitioners: recognizing veracity and authenticating sources become increasingly critical. The spread of Web 2.0, with the paradigm of User Generated Content rampant on social networks, has already challenged the role of publishers as cultural mediators, introducing into mass communication actors who do not comply with the rules and responsibilities to which official media are subject.
The proliferation of artificially generated news in Web 3.0 will cause further information pollution. The spread of deepfakes and misleading content will necessitate increasingly sophisticated verification and deepfake detection services, which are crucial to protecting information and the media. The first impact of artificial intelligence on journalism thus appears negative, requiring defensive strategies.
The virtual interview with J.F. Kennedy, produced by Carraro Lab and broadcast by Sky Tg24 on July 15, 2023, is an interesting provocation that exposes the risks of artificial intelligence in the world of journalism. John Fitzgerald Kennedy’s answers to Sarah Varetto’s questions were generated by a chatbot which, drawing on the president’s speeches and political positions available on the web, produced replies consistent with his rhetorical style, in both Italian and English. The aim was precisely to demonstrate the risks to which journalism is exposed.
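To make the general technique concrete, the sketch below illustrates persona-conditioned generation with a chat-oriented language model: a system prompt built from publicly available speech excerpts instructs the model to answer in the style of a historical figure. This is only a minimal, hypothetical illustration; the model name, excerpts, and question are placeholders and do not reflect the actual Carraro Lab implementation.

```python
# Hypothetical sketch of persona-conditioned answer generation.
# Model name, speech excerpts, and question are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Publicly available speech excerpts, collected beforehand,
# serve as stylistic and political context for the persona.
speech_excerpts = [
    "Ask not what your country can do for you...",
    "We choose to go to the Moon in this decade...",
]

persona_prompt = (
    "You answer as John F. Kennedy, imitating the rhetorical style and "
    "political positions documented in the excerpts below. "
    "Reply in the language of the question (Italian or English).\n\n"
    + "\n".join(f"- {excerpt}" for excerpt in speech_excerpts)
)

question = "What do you think of artificial intelligence in journalism?"

# The system message carries the persona; the user message carries the
# interviewer's question.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": persona_prompt},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

Even such a simple setup produces fluent, stylistically plausible answers, which is exactly what makes the provocation effective: nothing in the output itself signals that no real interview ever took place.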


