Artificial intelligence

Artificial intelligence (AI) refers to a device (or program) that perceives and responds to its environment and in that sense acts as an "intelligent agent". The history of the field of AI research is in large part one of great dreams, hype, and failure to construct the "strong AIs" (artificial minds of more general capability) imagined to be possible throughout the history of the electronic computer. Eventually, research and development refocused on the much more practical and realizable goal of producing systems that each perform or learn some specific skill proficiently: classifying objects in data, making robotic equipment navigate, making strategic decisions within the framework of a particular "game" (e.g. chess) or situation, and so on.

Older AI ambitions were often rather anthropocentric, e.g. the striving for an artificial brain which might replace the human organ. Anthropocentric thinking also underlies the old Turing test proposed by Alan Turing for evaluating a device as intelligent: (roughly) if a conversation with the device cannot be differentiated from a similar conversation with a human being, then the device can be called intelligent. Such a test is based on the response of an audience to a performance, much as an audience responds to a stage magician, and as such tests the discernment of the audience more than the device.
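
The protocol itself is simple enough to sketch. Below is a minimal Python sketch of the "imitation game"; the ask, human, machine, and judge callables are hypothetical placeholders for the interrogator, the two hidden parties, and the final guess, not any real API:

    import random

    def imitation_game(ask, human, machine, judge, rounds=5):
        # A toy run of Turing's imitation game. All four callables
        # are placeholders supplied by the caller:
        #   ask(transcript)      -> the interrogator's next question
        #   human(q), machine(q) -> the hidden parties' answers
        #   judge(transcript)    -> "A" or "B": the guess at which is human
        labels = ["A", "B"]
        random.shuffle(labels)  # hide who sits behind which label
        slots = {labels[0]: human, labels[1]: machine}
        transcript = []
        for _ in range(rounds):
            q = ask(transcript)
            transcript.append((q, {s: f(q) for s, f in slots.items()}))
        # The machine "passes" if the judge picks it as the human. Note
        # that nothing here inspects the machine's insides; the result
        # measures the judge's discernment.
        return slots[judge(transcript)] is machine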

Strong AI

Artificial general intelligence (AGI), also called strong AI,[1][2] is a hypothetical type of artificial intelligence that would not be limited to a specific task (as opposed to current AI, which is called specialized or "weak"), but would instead possess a general capability for thinking, learning, and improving, much like an organic mind (though not necessarily working like an organic brain). It differs from a collection of specialized weak AIs, which is what e.g. Amazon Alexa and Siri are; you can bundle as many such specialized tools as you want into a package, analogously to the parts of a Swiss army knife, without the ability to develop new tools in a self-directed way ever emerging from the bundle.

The AGI idea of a "full artificial mind" (often accompanied by some kind of artificial body) is a basis of transhumanist and singularity lore, and is commonly portrayed in science fiction in diverse ways. Without it, neither the wondrous future transhumanists often expect nor the fearsome cybernetic revolts (robot uprisings and AI doomsday scenarios) they sometimes warn about, and many sci-fi works depict, would be possible.

Despite immense amounts of money and research, and despite the broad range of specialized or "weak" AI products that have been created, a general artificial intelligence (a sentient computer, capable of initiative and seamless human interaction) has yet to come to fruition. (Some argue that a sentient computer might be more appropriately called an artificial consciousness than an artificial intelligence.)

John Searle proposed his "Chinese room" thought experiment to argue that a computer program merely shuffles symbols around according to rules of syntax, without ever obtaining any semantic grasp of what the symbols mean.[3] Proponents of "strong AI", who believe an awareness can exist within a purely algorithmic process, have put forward various critiques of Searle's argument.
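
To make the intuition concrete, here is a toy "room" in Python: a bare lookup table pairing input symbols with output symbols. The rulebook entries are invented for illustration; nothing in the program grasps what the symbols mean, which is precisely Searle's point:

    # A toy "Chinese room": responses come from rote symbol matching.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "Fine, thanks."
        "天空是什么颜色？": "蓝色。",    # "What colour is the sky?" -> "Blue."
    }

    def room(symbols):
        # Pure syntax: look up the input squiggles, emit the paired squoggles.
        return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(room("你好吗？"))  # prints 我很好，谢谢。 with zero comprehension involved

Real programs are vastly more elaborate than a lookup table, but Searle's claim is that elaboration adds only more rules, never understanding.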

Hubert Dreyfus's critique of artificial intelligence research has been especially enduring.[4][5] It does not explicitly deny the possibility of strong AI; it merely asserts that the fundamental assumptions of AI researchers are either baseless or misguided. Because Dreyfus's critique draws on philosophers such as Heidegger and Merleau-Ponty, it was largely ignored (and lampooned) on arrival. However, as the fantastic predictions of early AI researchers (which included solving all philosophical problems) continually failed to pan out, his critique has largely been vindicated, and even incorporated into modern AI research.

Transhumanist dreams

Dreams of replacing the human brain with a device equal to or greater than it in capacity are central to transhumanism, a staple of science fiction, and have long accompanied ideas of strong AI. Much like strong AI in general, such a thing may or may not be possible with future technology, going by what is known today. This is in contrast to functions that do not require strong AI, e.g. prosthetic limbs and implants related to sensory processing, some of which are known to be possible or are already in use.

Brains and cognition are not currently well understood, and the scale of computation needed for an artificial brain is unknown. The power consumption of computers, however, leads to speculation that an artificial brain would need orders of magnitude more energy than its biological equivalent. The human brain consumes about 20 W of power, whereas a current supercomputer may use as much as 1 MW, a factor of roughly 50,000 more, suggesting AI may be a staggeringly energy-inefficient form of intelligence. Critics of brain simulation believe that artificial intelligence can be modeled without imitating nature, using the analogy of early attempts to construct flying machines modeled after birds.[6][7]
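
For what it's worth, the back-of-the-envelope arithmetic behind that factor, using the figures quoted above:

    brain_w = 20            # human brain: roughly 20 W
    supercomputer_w = 1e6   # 1 MW, the supercomputer figure quoted above
    print(supercomputer_w / brain_w)   # 50000.0, i.e. between 10^4 and 10^5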

An artificial brain would not fall under the current biological definition of life any more than a kidney dialysis machine does. An example of a fictional character with this kind of prosthetic is Cyborg from the Teen Titans comics.

Machine learning

See the main article on this topic: Machine learning

In the field of artificial intelligence, machine learning is a set of techniques for training a computer model from sample inputs and expected outputs, so that the trained model behaves accordingly on new inputs. For example, machine learning can recognize objects in images or perform other complex tasks that would be too complicated to describe with traditional procedural code.
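
As a minimal sketch of the idea (a toy illustration, not how production systems are built), the snippet below fits a straight line to a handful of invented noisy samples by gradient descent, so that the rule y ≈ 2x + 1 is learned from examples rather than written by hand:

    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [1.0, 3.1, 4.9, 7.2, 9.0]    # noisy samples of y = 2x + 1

    w, b, lr = 0.0, 0.0, 0.01         # parameters and learning rate
    for _ in range(5000):
        # Gradient of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad_w
        b -= lr * grad_b

    print(round(w, 2), round(b, 2))   # ~2.01 ~1.02: the rule was learned
                                      # from examples, not written by hand

The same recipe (a parametrized model, a loss that measures mismatch with the expected outputs, and an optimizer that nudges the parameters) scales up to the neural networks behind image recognition.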

Risks of AI

In science fiction, much has been made of the risk of an AI takeover of human civilisation. Most risks of this type are unrealistic and, even if they weren't, are not relevant enough to today's society to be worth worrying about (though Eliezer Yudkowsky disagrees and will not tire of letting people know it).

However, there are also many more prosaic potential downsides of AI that may necessitate cautious use of the technology, changes in regulations, or political action well before AIs reach Terminator-like levels of general intelligence. These include:

  • AIs programmed to learn from whichever internet users happen to interact with them, picking up racism and sexism and deciding they are cool and funny - this one has already happened.[8]
  • AIs used by social media companies and YouTube creating filter bubbles - potentially inadvertently increasing political polarisation and extremism
  • Successive waves of technological unemployment - one of the first of which, according to Elon Musk, will be self-driving cars and trucks putting truck drivers out of work en masse. Will they all be able to get jobs as software developers or social media consultants? Will new jobs that don't exist today arise, so that they will all eventually be employed again? No-one can really be sure, and some, notably 2020 US Presidential candidate Andrew Yang, have advocated a universal basic income as a buffer against technological unemployment, while others have argued instead for a return to the twentieth-century idea of government full-employment programs.
  • Some Tesla fans have theorised that self-driving cars will, in the not-too-distant future, also make new cars unaffordable for all but the very wealthy, as car manufacturers focus on selling highly profitable and expensive autonomous vehicles to Uber and Lyft, or simply start their own autonomous taxi operations, as Tesla plans to
  • AI algorithms inadvertently making racist or sexist decisions about matters such as mortgages and other loans, or even criminal justice matters such as crime detection, sifting through large amounts of evidence, and bail or probation decisions - this has also already happened.[9]
  • Black-box AIs making decisions that affect people's lives, but whose reasoning is completely opaque and essentially undiscoverable by customers, judges and juries, and even the businesses that own them.
    • The logical implication is that these algorithms could be hacked by parties seeking pecuniary advantage, or trying to literally "get out of jail free", and possibly no-one would even notice...
    • Combine this with the tendency of some politicians and bureaucrats with little understanding of technology to simply say "computer says no" when confronted with disagreements about computer-generated decisions (even in the absence of any machine intelligence at all), and this could be a recipe for special interests to do "regulatory capture" in a whole new way
  • Most disturbingly of all, flying drones controlled by autonomous AIs could be used by rogue states or terrorist groups, or even by ordinary states in war scenarios, to injure or assassinate individuals, or to target large groups of people, such as political opponents of an authoritarian leader, with pinpoint accuracy, without it necessarily being traceable back to the leaders giving the orders. This dystopian scenario was vividly explored in a disturbing video titled Slaughterbots, produced by the Campaign to Stop Killer Robots (yes, that is actually its name).

Stephen Hawking's view

In a humorous interview with John Oliver, Stephen Hawking described AI as potentially dangerous. His final Reddit AMA was also almost entirely focused on his views on AI.[10]

Further reading

  • George Johnson, Machinery of Mind: Inside the New Science of Artificial Intelligence. Time Books, 1986. ISBN 0812912292
  • Roger Penrose, The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics. Oxford University Press, 1989. ISBN 0198519737
  • Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books, 1999. ISBN 0465026567

See also

For those of you in the mood, RationalWiki has a fun article about Artificial intelligence.
For those of you in the mood, RationalWiki has a fun article about Artificial stupidity.
If you are looking for this article in Portuguese, see Inteligência artificial.


References

  1. Ray Kurzweil, Long Live AI. Forbes, 15 August 2005. Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."
  2. Advanced Human Intelligence. Responsible Nanotechnology, 10 August 2005.
  3. John Searle, Minds, brains, and programs. The Behavioral and Brain Sciences, Cambridge University Press, 1980.

    "Could a machine think?" On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.

  4. Setargew Kenaw, Hubert L. Dreyfus’s Critique of Classical AI and its Rationalist Assumptions. Minds & Machines 18, 227–238 (2008).
  5. Hubert Dreyfus, What Computers Can't Do. MIT Press, 1972. ISBN 978-0-06-090613-9
  6. Goertzel, Ben (December 2007). "Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil". Artificial Intelligence 171 (18, Special Review Issue): 1161–1173. Retrieved April 1, 2009. 
  7. Fox and Hayes, quoted in Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis, p. 581. Morgan Kaufmann Publishers.
  8. Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism
  9. Rise of the racist robots – how AI is learning all our worst impulses
  10. Stephen Hawking on Reddit, 2015.