Isaac Asimov formulated the three famous Laws of Robotics, which govern the relationship between humans and robots with the aim of protecting the former:

  1. A robot cannot harm a human being, nor can it allow a human being to come to harm because of its failure to act.
  2. A robot must obey orders given by humans, as long as those orders do not conflict with the First Law.
  3. A robot must protect its own existence, as long as safeguarding it does not conflict with the First or Second Law.

The widespread application of Artificial Intelligence in the military has largely violated these laws, producing “killer robots” designed to eliminate or harm humans in battle.

Not even the debate on “Lethal Autonomous Weapons,” that is, autonomous weapons used against people identified as military targets, has succeeded in achieving a moratorium on AI-powered weapons, which on the contrary have become increasingly effective in warfare scenarios.

Pope Francis noted that “the possibility of conducting military operations through remote control systems has led to a diminished perception of the devastation they cause and of the responsibility for their use, contributing to an even colder and more detached approach to the immense tragedy of war.”

This is another of the now numerous issues surrounding the use of AI that call for, and indeed demand, deep reflection on the part of humanity.