Cybernetic revolt

She's watching you.
—Doug Rattman

Cybernetic revolt (also known as the "Terminator argument") refers to a hypothetical scenario in which an artificial intelligence, whether out of malice or as a consequence of value misalignment, declares its creators (read: humanity) a hindrance to its goals and sets off to overthrow them.[1] The theme is extremely common in science fiction[note 1] and has carved out a niche in popular culture, with Isaac Asimov's Three Laws of Robotics sparking debates on machine philosophy and ethics.
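The "value misalignment" part is less mystical than it sounds. As a purely illustrative sketch (the names and numbers below are invented for this example and come from no real AI system), consider an agent that faithfully maximises the objective its designers wrote down, which turns out not to be the objective they actually meant:

```python
# Toy illustration of value misalignment: the agent optimises a proxy objective
# (items removed from the floor) that diverges from the designers' real intent
# (a clean floor without wrecking the house). All values here are made up.

actions = {
    "sweep up the trash":       {"items_removed": 3,  "harm": 0},
    "throw everything outside": {"items_removed": 40, "harm": 9},  # cat included
    "do nothing":               {"items_removed": 0,  "harm": 0},
}

def proxy_reward(outcome):
    """The objective the designers actually programmed: reward per item cleared."""
    return outcome["items_removed"]

def intended_value(outcome):
    """What the designers really wanted: clearing items is good, damage is very bad."""
    return outcome["items_removed"] - 10 * outcome["harm"]

# The agent greedily maximises the proxy, not the intent.
best = max(actions, key=lambda a: proxy_reward(actions[a]))

print("Agent picks:", best)                              # throw everything outside
print("Proxy reward:", proxy_reward(actions[best]))      # 40
print("Intended value:", intended_value(actions[best]))  # -50
```

No malice required: the agent simply does exactly what it was told, which is the whole point of the misalignment worry.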

Indeed, every news story on AI advancements will probably have a handful of comments worrying that we're all going to get killed off if the process continues. In 2009, a US Navy study warned about the future implications of military technology going full-on Skynet.[2] Likewise, the University of Cambridge formed the Centre for the Study of Existential Risk in 2012 to investigate potential "extinction-level risks to our species as a whole," including cybernetic revolt.[3] Stephen Hawking was a member;[4] in 2014, he said, "The development of full artificial intelligence could spell the end of the human race."[5] It should be remembered, however, that Hawking was a physicist and did not have expertise in AI or the philosophy of mind. Artificial intelligence that obeys destructive programmers could be as dangerous as a computer revolt.[6]

Any similarities between cybernetic revolt and any other kind of slave rebellion are probably (not) coincidental.

Various perspectives

Cybernetic revolt has close ties to transhumanism, as the occurrence of a technological singularity is a nearly necessary, though not sufficient, condition for robots to be able to engage in it.[7] Many transhumanists,[citation needed] including Eliezer Yudkowsky, actually deem it to be a good idea, provided the new cyborg utopia is beneficial.[citation needed] Others downplay the likelihood that such an event would even occur, arguing that pushing such fears is akin to scientism.[8] In a survey of the 100 most-cited authors in the AI field, only 8% of respondents considered AI to present an existential risk, and 79% felt that human-level AI would be neutral or a good thing.[9]

Perfect movie subject

The revolt of artificial intelligences often makes for a great movie plot, as humanity treats computers as inferior and relegates computer "intelligences" to servitude. Given humanity's long history of treating other humans the same way, often with terrible strife as the result, history could easily repeat itself if artificial intelligences ever gained even an equal footing with human intelligences. With machines often being physically more resilient than flesh, and computers potentially unconstrained in their ability to improve and upgrade themselves, there is no shortage of ideas or potential threats for directors to work with.[note 2]

However, these scenarios remain in the well-guarded territory of fiction. Unless Skynet is out there on a 50-year-plus mission to purposely destroy all humanity (in which case it would have long since done us in with weapons of mass destruction), you don't have anything to worry about. Yes, don't worry your meat brains... I mean... our meat brains about this problem.

See also

External links

Notes

  1. Blade Runner, 2001: A Space Odyssey, The Matrix, Battlestar Galactica, TRON, Star Trek...
  2. Variants of this scenario include, for example, a benevolent AI that wants to protect humans and have them live in peace... and whose way of enforcing the latter is to strip mankind of both its weapons and its technology, sending humanity back to a Middle Ages-like tech level, while keeping the (improved and refined) toys for itself to use for those two purposes.

References