Killer robot

Were you looking for the killer rabbit?

A killer robot or autonomous weapon is a weapon system that uses artificial intelligence to select targets and decide when to open fire. Examples notionally include autonomous drones with weapons; automated gun turrets; autonomous tanks (self-driving cars with guns); autonomous boats; and actual humanoid robots that can stride across battlefields intelligently engaging with the enemy. They should be distinguished from machines that operate under direct human control (e.g. "remotely piloted aircraft"[1]) by their ability to operate for long periods without human intervention, to respond to their surroundings, and to intelligently select the best way to fulfill a general goal.

Autonomous weapons offer the advantage of allowing a nation to fight wars without risking the lives of its own soldiers. They make it easier to perform dull tasks, such as guarding a perimeter, that might otherwise be left to minefields or bored and inattentive conscripts. They may be better, or at least faster, at identifying targets and then shooting them, making them particularly suitable for situations where a fast response is required, such as defending against missiles or warplanes. They may substitute for indiscriminate techniques such as carpet bombing or mining. And they allow battles to be fought in places where human soldiers cannot easily go, such as in space, in a post-nuclear wasteland, or underwater.

However, there are significant arguments over the ethics of allowing computers to decide whether or not to kill somebody. There are many documented cases of artificial intelligence turning out to be racist,[2] which would be particularly problematic if your robot started randomly shooting black people. It is hard for a computer to distinguish between combatants and non-combatants, something that requires human knowledge and judgment.[3] There is the question of who is responsible for war crimes committed by robots, and how such crimes could be prosecuted.[3] Robot soldiers also potentially encourage nations to wage war by reducing the human cost of starting one.[3]

There are also cybersecurity issues: what if somebody hacked or illicitly took control of a killer robot (a problem that also exists with conventional drones and remote-controlled robots), or uploaded new firmware and sent it on a killing spree? And would it encourage war and the killing of less technologically advanced human enemies? Or is it playing god?

As with many technological topics, the subject is heavily influenced by fictional depictions such as The Terminator films.

Current and proposed systems

Non-autonomous systems

Killer robots should be differentiated from existing systems where a human operator provides remote control and "pulls the trigger". This includes drone warfare, but also the case where Dallas police used a bomb-disposal robot to kill mass shooter Micah Johnson.[4][5]

In addition, artificial intelligence and other systems can be used to aid and augment human capabilities, e.g. identifying and tracking targets or assisting in flying a warplane. This is common in modern manned fighter jets, fire control systems on ships, etc. These systems are not autonomous because they still require human intervention to "pull the trigger".

Defensive systems

AI is already used in defensive systems, such as the ship-borne Phalanx CIWS, which need to be able to detect incoming missiles and respond very quickly: these systems can detect, track, aim, and fire automatically. It is not proposed that this application of AI should be banned.[6] Such systems are designed to target missiles and artillery rather than people, although they can still cause human fatalities if they malfunction or misidentify targets.[7]

Computer-controlled guns

The Kalashnikov Group in Russia is developing a machine gun that can decide when to shoot.[6]

Warships

The US Navy has shown an interest in autonomous warships. One example is the Sea Hunter, a submarine-hunting vessel that can operate for 60-90 days without a human crew, travelling around looking for submarines; the initial plan is for a human to supervise it but not directly steer or control it.[8][9] China has also unveiled the D3000, an autonomous ship capable of operating for months.[10]

Autonomous drones

Existing military drones are flown like a remote-control plane by a controller who may be thousands of miles away: military sources call these "remotely piloted aircraft".[1] Similarly, drones sold to consumers offer either direct control or modes where limited autonomy remains under human control, such as hovering or "follow me" modes. Autonomous drones are tightly restricted or banned in most places, but they can be used for non-lethal applications such as Amazon's famous parcel-delivery drones or farmers' regular automated surveys of their land.[11] There are also non-lethal military applications, such as overwhelming radar with hundreds of fake targets.[12]

Autonomous drones typically have a goal programmed in but freedom in pursuing that goal, based on the ability to respond to circumstances and choose an appropriate course of action. The key to autonomous drones as effective weapons is the ability to select a target and choose to fire on it; existing systems can already track a chosen target and direct fire at it, but they do not identify someone as hostile and choose to engage. Modern armed forces are, however, now researching how the latter can be done.[12]
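
To make the distinction concrete, here is a minimal, purely illustrative sketch (in Python) of the difference between the two kinds of control loop: in both cases software detects and tracks objects, but in one a human must approve every engagement, while in the other the "decision" is nothing more than a threshold applied to a classifier score. Every name in it (Track, detect_and_track, hostile_score, and so on) is invented for this example and bears no relation to any real system.

# Purely illustrative sketch of the control-loop difference discussed above.
# All names are invented; nothing here models any real weapon system.
from dataclasses import dataclass
import random

@dataclass
class Track:
    """A sensed object being tracked by the shared sensor pipeline."""
    track_id: int
    hostile_score: float  # output of some (possibly biased) classifier

def detect_and_track() -> list[Track]:
    """Stand-in for the detection/tracking stage both kinds of system share."""
    return [Track(track_id=i, hostile_score=random.random()) for i in range(3)]

def human_in_the_loop(tracks: list[Track]) -> list[int]:
    """Remotely operated / supervised mode: software only nominates targets;
    a person must confirm every engagement ("pulls the trigger")."""
    approved = []
    for t in tracks:
        answer = input(f"Engage track {t.track_id} (score {t.hostile_score:.2f})? [y/N] ")
        if answer.strip().lower() == "y":
            approved.append(t.track_id)
    return approved

def fully_autonomous(tracks: list[Track], threshold: float = 0.9) -> list[int]:
    """Autonomous mode: the engagement decision is reduced to a threshold on a
    classifier score, with no human judgment anywhere in the loop."""
    return [t.track_id for t in tracks if t.hostile_score >= threshold]

if __name__ == "__main__":
    tracks = detect_and_track()
    # human_in_the_loop(tracks) would instead ask a person before each shot.
    print("Autonomous engagements:", fully_autonomous(tracks))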

Proposed bans

While falling short of a ban, the US, China, Germany, Russia, and some other countries have in the past declared that they have no intention of developing "fully autonomous combat systems".[13] However, such declarations are already being rendered out of date by new technology.

The UN debated a ban in 2018 but failed to reach agreement.[14] The European Parliament voted for a full ban in 2018, but this doesn't restrict national governments in the EU.[14]

In March 2021, New York councilmember Ben Kallos proposed that the city's police be banned from using armed robots. This was a response to the police's increased use of unarmed robots, such as Boston Dynamics' Digidog, a four-legged robot used for surveillance, and to the fear that this would soon lead to the use of robots fitted with weapons.[15] In April 2021, New York mayor Bill de Blasio called off trials of the Boston Dynamics robot, with a spokesman saying it was "creepy, alienating, and sends the wrong message to New Yorkers." The robot dog had been opposed by Alexandria Ocasio-Cortez and other community activists, who pointed to the resemblance between using robot dogs to police ethnic minorities and the use of actual dogs to threaten and control them (such as the use of dogs by whites against blacks, and of "slave dogs" in pre-abolition times).[16][17] Interestingly, for many people the problem seems to have been the resemblance to a dog rather than to a robot.

Warnings

Various bodies have warned about the risk. In 2015, the Future of Life Institute published an open letter on the dangers, signed by people including Elon Musk, Stephen Hawking, and Noam Chomsky.[13] Musk and 115 other artificial intelligence experts called for a total ban in a 2017 petition to the UN.[13]

The Campaign to Stop Killer Robots is a coalition led by Human Rights Watch calling for a ban on "fully autonomous weapons".[18][19]

Debates

A human decision maker

It is often suggested that there is a fundamental difference between systems where a human makes the final decision to shoot (good) and those where a computer does it automatically according to an algorithm or AI model (bad).[12] There are cases where real-world knowledge, with a human's understanding of human culture and behavior, may let people make a judgment a computer could not. However, this is not always true, for a number of reasons:

  1. Target selection: if this is done automatically, there is vast potential for bias or error. The operator decides to shoot, but doesn't have any input into who the gun is pointing at.
  2. The human operator, sitting thousands of miles away in front of a screen, only sees what the computer sees; the human making the decision doesn't have more data than an algorithm does. Night-vision equipment, data corruption and error correction, image processing, and other methods of data enhancement may distort and deform, create artifacts, or otherwise make an innocent person look like a dirty terrorist. If the input is garbage, the decision will be garbage (see the sketch after this list).
  3. People may make worse decisions than computers. When the USS Vincennes shot down Iran Air Flight 655, its crew had an Aegis Combat System which tracks incoming craft, takes measurements, and presents human operators with the choice of whether to fire; but in that case, people driven by bias or prejudice ignored data which clearly indicated a civilian airliner rather than a military jet and shot it down anyway, assuming with a conspiracy-theory mindset that it was a warplane pretending to be a civilian jet.[20] That was a case of people reasoning incorrectly, but an alternative model is the computer user blindly clicking OK on every pop-up they see without reading or thinking.
  4. People don't pay attention. Experience with self-driving cars shows that a similar model, where a robot goes around while a human watches and catches occasional failures, is very dangerous because people do not pay sufficient attention and may not respond rapidly enough to correct a robot.
  5. The operator may not be to blame for mistakes, and accountability may be just as complex with a human operator as with a robot. There's a desire to have someone you can hold accountable, put on trial for war crimes, etc., but for all the reasons given above, in a technologically complex system where information is limited and heavily mediated, the fault may lie elsewhere and the operator may become a scapegoat or patsy.
  6. A human in the loop doesn't make a system any more secure: the weapon could still be hacked or maliciously taken over. Indeed, it may be easier for a lone individual to go rogue than for a complex piece of software to do so.
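
As a toy illustration of the "garbage in, garbage out" point (item 2 above), the following sketch, invented entirely for this article, "enhances" a frame of pure sensor noise and then applies a naive detection threshold; the result is a handful of confident-looking "targets" in an image that contains nothing at all.

# Illustrative sketch only: aggressive "enhancement" of a noisy, near-empty
# sensor frame can manufacture confident-looking detections.
# Nothing here models any real targeting pipeline.
import numpy as np

rng = np.random.default_rng(0)

# A "sensor frame" containing nothing but faint noise (no real target).
frame = rng.normal(loc=0.0, scale=0.05, size=(64, 64))

# Naive enhancement: stretch the contrast so the frame fills the 0..1 range.
stretched = (frame - frame.min()) / (frame.max() - frame.min())

# Naive detection: anything above a fixed threshold counts as a "target".
detections = stretched > 0.9

print(f"'Targets' found in pure noise: {detections.sum()} pixels")
# The operator staring at the enhanced image sees bright blobs, not the fact
# that the raw signal was indistinguishable from noise.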

Wilder ideas

Killer nanobot swarm

One suggestion is that a swarm of small drones or mini-robots could be released to kill everybody they come across, with potential for mass murder if used in civilian areas. Supposedly such technology could be mass-produced or 3D-printed, making it accessible to terrorists.[6] This is possible with current technology, as is flying a drone attached to a bomb, or simply crashing unarmed drones into people, although it's not clear that it's much easier than other methods of causing civilian death or disruption: bullets are cheaper than drones, and biological agents are less conspicuous.

See also

References

  1. Autonomous military drones: no longer science fiction, NATO Review, 2017
  2. Artificial Intelligence Has a Problem With Gender and Racial Bias. Here's How to Solve It, Joy Buolamwini, Time, 7 February 2019
  3. The Problem, The Campaign to Stop Killer Robots website, accessed 9 April 2019
  4. The Truth About Killer Robots: the year's most terrifying documentary, The Guardian, 6 November 2018
  5. See the Wikipedia article on 2016 shooting of Dallas police officers.
  6. Don't Let Robots Pull the Trigger, Scientific American, 1 March 2019
  7. See the Wikipedia article on Phalanx CIWS.
  8. The U.S. Navy Wants to Roll out Autonomous Killer Robot Ships, Futurism, 16 January 2019
  9. See the Wikipedia article on Sea Hunter.
  10. With the D3000, China enters the robotic warship arms race, Popular Science, 25 September 2017
  11. Autonomous drone vs self-flying drones, what's the difference?, Drone Rush
  12. Yikes. UK military looking into building 'fully autonomous' killer drone tech – report, The Register, 12 November 2018
  13. The Threat of Killer Robots, Vasily Sychev, UNESCO Courier, UNESCO, March 2018
  14. MEPs vote to ban 'killer robots' on battlefield, BBC, 12 September 2018
  15. New York lawmaker wants to ban police use of armed robots, Ars Technica, 21 March 2021
  16. New York mayor calls off 'creepy, alienating' police robo-dog, The Guardian, 30 April 2021
  17. Slave Hounds and Abolition in the Americas, Tyler D. Parry, Charlton W. Yingling, Past & Present, Volume 246, Issue 1, February 2020, Pages 69–108, https://doi.org/10.1093/pastj/gtz020
  18. The Campaign to Stop Killer Robots website, accessed 9 April 2019
  19. Killer Robots, Human Rights Watch, accessed 9 April 2019
  20. See the Wikipedia article on Iran Air Flight 655.