Isaac Asimov formulated the famous Three Laws of Robotics, which govern the relationship between humans and robots with the aim of protecting the former:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

The widespread application of Artificial Intelligence in the military has largely violated these laws, producing precisely the “killer robots” designed to kill or harm humans in battle.

Not even the debate on “Lethal Autonomous Weapons,” that is, autonomous weapons deployed against people identified as military targets, has succeeded in securing a moratorium on AI-powered weapons, which on the contrary have become increasingly effective in warfare scenarios.

Pope Francis noted that “the possibility of conducting military operations through remote control systems has led to a diminished perception of the devastation they cause and of the responsibility for their use, contributing to an even colder and more detached approach to the immense tragedy of war.”

This is another of the now numerous issues raised by the use of AI that call for, and indeed prompt, deep reflection on the part of human beings.