Artificial intelligence

Artificial intelligence (AI) refers to a device (or program) which perceives and responds to its environment and in that way acts as an "intelligent agent".[note 1] The history of AI research spans great dreams and more than half a century of failure to realize "strong AI" (an artificial mind of more general capability).[note 2] Eventually, research and development refocused on the productive goal of building systems each with a specific skill, such as analyzing data to classify objects, navigating robotic equipment, or making strategic decisions within the framework of a particular "game" (e.g. chess) or situation. In the early 2020s, the chatbot (earlier mostly a toy) was reinvented on the foundation of the large language model, producing something with a greater range of skills, yet still far from the old dream of strong AI.
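
The "intelligent agent" definition is deliberately broad (see note 1: even a thermostat qualifies). As a minimal sketch of the idea, here is a toy perceive-and-respond agent in Python; the class name and 20-degree setpoint are illustrative assumptions, not anything from the AI literature:

```python
# A minimal "intelligent agent": it perceives its environment (a temperature
# reading) and responds to it (switching a heater). Illustrative only; the
# names and the 20.0-degree setpoint are arbitrary assumptions.

class ThermostatAgent:
    def __init__(self, setpoint: float = 20.0):
        self.setpoint = setpoint

    def act(self, perceived_temperature: float) -> str:
        # The entire "intelligence": one rule mapping perception to action.
        return "heater_on" if perceived_temperature < self.setpoint else "heater_off"

agent = ThermostatAgent()
print(agent.act(17.5))  # heater_on
print(agent.act(22.0))  # heater_off
```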

Older AI ambitions were often rather anthropocentric, e.g. the striving for an artificial brain which may replace the human organ. Anthropocentric thinking also underlies the old Turing test proposed by Alan Turing for evaluating a device as intelligent: (roughly) if a conversation with the device cannot be distinguished from a similar conversation with a human being, then the device can be called intelligent. Such a test is based on the response of an audience to a performance, similarly to how an audience may respond to a stage magician, and as such tests the discernment of the audience more than the device.

Strong AI[edit]

Artificial general intelligence (AGI), also called strong AI,[1][2] is a hypothetical type of artificial intelligence that would not be limited to a specific task (as opposed to current AI, which is called specialized or "weak"), but would rather possess a general capability of thinking, learning and improving much like an organic mind (though not necessarily working like an organic brain). It differs from a collection of specialized weak AIs, which is what e.g. Amazon Alexa and Siri are; you can include as many such specialized tools as you want in a package, analogously to the parts of a Swiss army knife, without the ability to develop new tools in a self-directed way ever emerging.

With the 2020s generative AI boom and systems like the large language model ChatGPT, confusion about the meaning of strong AI or AGI has spread, allegedly deliberately on the part of some AI vendors, who hope to be able to more easily claim to have achieved AGI.[3] For example, AGI has at times been defined by ChatGPT maker OpenAI as an AI surpassing humans in a majority of economically valuable tasks – so that if it sells and can profitably replace many types of human workers, it counts as AGI. This need not imply a fluid intelligence, and could hypothetically be achieved through the machine learning version of large-scale rote memorization and brute force – particularly if commercial use values quantity and superhuman speed above quality and originality in arts and intellectual crafts where applicable.

The older, more established AGI idea of a "full artificial mind" (often accompanied by some kind of artificial body) is a basis of transhumanism and singularity lore. It is also commonly portrayed in science fiction in diverse ways. Without it, neither the wondrous things transhumanists often expect to happen in the future nor the fearsome cybernetic revolts (robot uprisings and AI doomsday scenarios) they sometimes warn about, and many sci-fi works depict, would be possible.

Its possibility[edit]

Despite immense amounts of money, research, and a broad range of specialized or "weak" AI products having been created, a general artificial intelligence — a sentient computer, capable of initiative, general reasoning, and seamless human interaction — has yet to come to fruition. (Some argue that a sentient computer might be more appropriately called an artificial consciousness than an artificial intelligence.) The boom in generative AI (a sub-type of weak AI), however, pushes stochastic machine imitation to its limits,[note 3] showing that greater model size and training data lead to a broader range of canned skills. This has led to new debates about how to distinguish the presence or absence of intelligence, and about the relation of such canned skills to general intelligence, with skepticism expressed by a broader range of researchers, including in cognitive science.[4]

In 2023, some researchers argued that the most powerful LLMs already constituted AGI.[5] Yet, as of 2024, their claim that LLMs can be competent at nearly any human information task is still very much lacking in evidence: LLMs have trouble with questions such as what the word "it" refers to in a sentence like "The trophy would not fit in the suitcase because it was too big", and otherwise evince a lack of abstract thought. Beyond their points about the AI itself, such "LLMs lead to AGI" debaters may also differ from their critics on larger questions about human intelligence, and intelligence in general, and how it works. It would not be the first time such a gap in understanding led AI researchers to assume they were nearly there, only to fail to deliver on the AGI front.

Hubert Dreyfus's critique of artificial intelligence research, made back in the era when AI research tried to create AGI through symbol manipulation systems, has been especially enduring.[6][7] It does not explicitly deny the possibility of strong AI, but it asserts that the fundamental assumptions of AI researchers at the time were either baseless or misguided. Because Dreyfus's critique draws on philosophers such as Heidegger and Merleau-Ponty, it was largely ignored (and lampooned) at the time of its arrival. However, as the fantastic predictions of early AI researchers (which included the solution of all philosophical problems) continually failed to pan out, his critique has largely been vindicated, and even incorporated into modern AI research. But this has arguably only happened piecemeal, problem by problem, and in response to the problems rather than in response to Dreyfus.[8]

There are also those who question more categorically whether a computer can even qualify in principle. John Searle proposed his "Chinese room" thought experiment to demonstrate that a computer program merely shuffles symbols around according to simple rules of syntax, without ever obtaining any semantic grasp of what the symbols really mean.[9] Proponents of "strong AI", who believe an awareness can exist within a purely algorithmic process, have put forward various critiques of Searle's argument. Ultimately, Searle's argument seems inconclusive.
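
To make Searle's picture concrete, here is a toy sketch of a "room" that converses by blind rule lookup. The rulebook below is a made-up assumption for illustration, not Searle's own formulation; the point is only that nothing in the process involves understanding the symbols:

```python
# Toy "Chinese room": replies are produced by blind rule lookup.
# The rulebook is an invented example; pure syntax, no semantics anywhere.

RULEBOOK = {
    "你好": "你好！",          # if this symbol string comes in, push that one out
    "你会思考吗？": "当然会。",
}

def room(symbols: str) -> str:
    # Match the incoming squiggles, emit the prescribed squiggles.
    return RULEBOOK.get(symbols, "请再说一遍。")

print(room("你好"))  # The room "converses" in Chinese; it understands nothing.
```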

There are also woo objections to the possibility of strong AI, or at any rate objections on unfalsifiable grounds. These can, for example, be religious, based on ideas of quantum consciousness, or based on some idea about biology (or maybe humanity in particular) being special. Some LLM and AGI hypers tend to lump all critics into this category. Our take is that it's sensible to stay open to future evidence and potential demonstrations of AGI capabilities.

Transhumanist dreams[edit]

Dreams of replacing the human brain with a device equal to or greater in capacity are central to transhumanism, a staple of science fiction, and have long accompanied ideas of strong AI. Much like strong AI in general, such a thing may or may not be possible with future technology, going by what is known today. This is in contrast to functions that do not require strong AI, e.g. prosthetic limbs and implants related to sensory processing, some types of which are known to be possible or are even already in use.

Brains and cognition are not currently well understood, and the scale of computation needed for an artificial brain is unknown. The power consumption of computers, however, invites speculation that an artificial brain would need orders of magnitude more energy than its biological equivalent. The human brain consumes about 20 W of power (much of it apparently spent just keeping the brain permanently up and running, with some energy simply leaked away uselessly[10]), whereas a current supercomputer may use as much as 1 MW, a factor of 50,000 more, suggesting AI may be a staggeringly energy-inefficient form of intelligence. Critics of brain simulation believe that artificial intelligence can be modeled without imitating nature, using the analogy of early attempts to construct flying machines modeled after birds.[11][12]
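
A back-of-the-envelope check of that factor, using the figures quoted above:

```python
# Rough energy comparison using the figures in the paragraph above.
brain_watts = 20            # human brain: ~20 W
supercomputer_watts = 1e6   # large supercomputer: ~1 MW
print(supercomputer_watts / brain_watts)  # 50000.0 -> a factor of ~50,000
```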

An artificial brain would not fall under the current biological definition of life any more than a kidney dialysis machine does. An example of a fictional character with this kind of prosthetic is Cyborg from the Teen Titans comics.

Machine learning[edit]

See the main article on this topic: Machine learning

In the field of artificial intelligence, machine learning is a set of techniques for training a computer model on sample inputs and expected outputs, so that its behavior matches the examples. For example, machine learning can recognize objects in images or perform other complex tasks that would be too complicated to describe with traditional procedural code.
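
As a minimal illustration of training on "sample inputs and expected outputs", here is a toy supervised-learning example using the scikit-learn library; the data and labels are made up for illustration:

```python
# Minimal supervised machine learning: the model is trained ("fit") on
# sample inputs X and expected outputs y, then generalizes to new inputs.
# Requires scikit-learn; the data here is a made-up toy example.
from sklearn.linear_model import LogisticRegression

# Inputs: [height_cm, weight_kg]; outputs: 0 = "cat", 1 = "dog".
X = [[25, 4], [23, 5], [60, 25], [55, 30]]
y = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X, y)                    # learn from examples, not explicit rules
print(model.predict([[24, 4.5]]))  # [0] -> classified as "cat"
```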

Large language models (LLMs) are neural networks for language modeling that are large (at least tens of millions of "parameters", the trainable weights of the network). These include the models behind ChatGPT, which sparked a great renewed interest in chatbots, reinvented on an LLM foundation.
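
To show the "language modeling" idea in miniature, here is a toy bigram model: it counts which word follows which in a tiny corpus, then generates text by sampling likely next words. A real LLM is a neural network with billions of parameters, but the predict-the-next-token objective is the same; the corpus below is a made-up assumption:

```python
# Toy language model: counts which word follows which in a training corpus,
# then generates text by repeatedly sampling a likely next word.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()  # made-up corpus

# Count bigram transitions: next_words["the"] == Counter({"cat": 2, "mat": 1})
next_words = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev][nxt] += 1

def generate(word: str, length: int = 5) -> str:
    out = [word]
    for _ in range(length):
        counts = next_words.get(out[-1])
        if not counts:
            break  # no known continuation; stop generating
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat"
```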

Risks of AI[edit]

Some science fiction has highlighted the risk of an AI takeover of human society. Most risks of this type are unrealistic and, even if they weren't, would not be relevant enough in today's society to be worth worrying about (though Eliezer Yudkowsky disagrees and will not tire of letting people know it).

However, there are also many more prosaic potential downsides of AI that may necessitate cautious use of the technology, changes in regulations, or political action – way before hypothetical future AI technology reaches Terminator-like levels of general intelligence. These include:

  • Proliferation of junk information, harmful messages, or works infringing on rights. From spam, to plagiarism and forgeries, to psychological warfare, various machine learning approaches can be used to produce either remarkable quality, quantity, or even both, of bogus information of some chosen kind – text, images, audio, videos, etc. Deepfakes are fabrications made to have convincing qualities, e.g. using a generative adversarial networkWikipedia (GAN). In terms of quantity, LLMs like ChatGPT led to new fears of both mass plagiarism and automated disinformation sweeping the world.
    • AIs programmed to learn from any internet users who happen to interact with them, picking up racism, sexism, and misinformation, and parroting it – this has happened with older types of chatbot.[13] Similarly, generative AIs like LLMs soak up expressions of bigotry, and other biases and flaws in their data sets (sourced in large part from the web), when the models are trained, and imitate them. While work can be done on a model to reduce the issue, large models are inscrutable and not all problems can reasonably be weeded out.[14]
    • Copyright issues around the output of generative AIs, both because their training datasets include copyrighted works used without the artists' permission, and because it is unclear who owns the copyright of a work generated this way[15][note 4]
  • AIs used by social-media systems and platforms such as YouTube creating filter bubbles – potentially inadvertently increasing political polarisation and extremism.
  • Successive waves of technological unemployment. Self-driving cars and trucks may eventually take over the work of professional drivers.[note 5] Various white collar workers who handle boilerplate text, or chat with people, may be replaced by chatbots.[note 6] Online "influencers" can be replaced by generative AI, creating artificial social media personalities.[16] Business leaders also dream of replacing software developers with chatbots, though without AGI this cannot go far.[note 7] New jobs involving the use and supervision of AI tools can be expected, but how many? No one can yet be sure how employment will be affected on the whole. Some, notably including 2020 US Presidential candidate Andrew Yang, have advocated a universal basic income to act as a buffer against technological unemployment, though others have argued instead for a return to the twentieth-century idea of government full-employment programs.
  • Some Tesla fans have theorised that self-driving cars will, in the not-too-distant future, also mean that new cars become unaffordable for all but the very wealthy, as car manufacturers focus on selling highly profitable and expensive autonomous vehicles to Uber and Lyft, or simply start their own autonomous taxi operations (as Tesla plans to do).
  • AI algorithms inadvertently making racist or sexist decisions about matters such as mortgages and other loans, or even about criminal-justice matters such as crime detection, sifting through large amounts of evidence, bail or probation decisions – this has also already happened.[17]
  • Black-box AIs making decisions that affect people, but whose reasoning is completely opaque and essentially undiscoverable by customers, judges and juries, and even by the organisations that own them.
    • The logical implication is that these algorithms could be hacked for pecuniary advantage, or to literally "get out of jail free", and possibly no-one would even notice...
    • Also, combine this with the tendency of some politicians and bureaucrats with little understanding of technology to simply say "computer says no" when confronted with disagreements about computer-generated decisions, even in the absence of any machine intelligence at all, and this could be a recipe for special interests to do "regulatory capture" in a whole new way.
  • Environmental concerns, given how energy-intensive generative AIs are even for trivial tasks such as chatting or generating images, especially as artificial intelligence becomes more widely adopted.[18][19]
  • Most disturbingly of all, flying drones controlled by autonomous AIs could be used by rogue states or terrorist groups (or even by ordinary states in war scenarios) to injure or assassinate individuals, or even to target large groups of people with pinpoint accuracy, such as political opponents of an authoritarian leader, without it necessarily being traceable back to the people giving the orders. This dystopian scenario was vividly explored in a disturbing video titled Slaughterbots, produced by the Campaign to Stop Killer Robots (yes, that is actually its name).

Stephen Hawking's view[edit]

In a humorous interview with John Oliver, Stephen Hawking described AI as potentially dangerous. His final Reddit AMA was also almost entirely focused on his views on AI.[20]

See also[edit]

For those of you in the mood, RationalWiki has a fun article about Artificial intelligence.
For those of you in the mood, RationalWiki has a fun article about AI.
For those of you in the mood, RationalWiki has a fun article about Artificial stupidity.

Want to read this in another language?[edit]

If you are looking for this article in Portuguese, see Inteligência artificial.


Further reading[edit]

  • George Johnson, Machinery of Mind: Inside the New Science of Artificial Intelligence. Time Books, 1986. ISBN 0812912292
  • Roger Penrose, The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics. Oxford University Press, 1989. ISBN 0198519737
  • Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books, 1999. ISBN 0465026567

Notes[edit]

  1. The concept of intelligent agent is very general, as it ranges from the thermostat and other things involving some regulatory mechanism, however simple, to human beings and other potential minds we may consider intelligent, as in actually having intelligence.
  2. Strong AI has been dreamed of and vaguely imagined to be possible throughout the history of the electronic computer.
  3. The idea of a probabilistic imitation machine that parrots things with random variation without understanding it has been termed the stochastic parrot.
  4. See, for example, this AI-generated picture of the god Cernunnos and that garbled watermark at bottom left
  5. Self-driving vehicle technology is slowly improving, though as of the early 2020s the hype remains overly optimistic about the rate of progress.
  6. In roles of technical or work support, psychological counseling, etc., LLM chatbots are poor substitutes for skilled human guidance, but they're cheaper to operate, making the transition profitable.
  7. Human developers may use LLM assistants to speed up some tasks, but can't generally be replaced by LLMs for the thinking involved in design, innovation, security expertise, to name some key areas.

References[edit]

  1. Ray Kurzweil, Long Live AI. Forbes, 15 August 2005. Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."
  2. Advanced Human Intelligence. Responsible Nanotechnology, 10 August 2005.
  3. "Why everyone seems to disagree on how to define Artificial General Intelligence", Mark Sullivan, Fast Company, 18 October 2023
  4. Clark, Lindsay (4 July 2023). "Artificial General Intelligence remains a distant dream despite LLM boom". The Register.
  5. "Artificial General Intelligence Is Already Here", Blaise Agüera y Arcas and Peter Norvig, October 10, 2023
  6. Setargew Kenaw, Hubert L. Dreyfus’s Critique of Classical AI and its Rationalist Assumptions. Minds & Machines 18, 227–238 (2008).
  7. Hubert Dreyfus, What Computers Can't Do. MIT Press, 1972. ISBN 978-0-06-090613-9
  8. Pamela McCorduck, Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence (2nd ed.). Natick, Mass.: A.K. Peters, 2004. ISBN 1-56881-205-1.
  9. John Searle, Minds, brains, and programs. The Behavioral and Brain Sciences, Cambridge University Press, 1980.

    "Could a machine think?" On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.

  10. We finally know why the brain uses so much energy
  11. Goertzel, Ben (December 2007). "Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil". Artificial Intelligence 171 (18, Special Review Issue): 1161–1173. Retrieved April 1, 2009. 
  12. Fox and Hayes quoted in Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis, p. 581. Morgan Kaufmann Publishers.
  13. Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism
  14. Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (2021-03-01). "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. FAccT '21 (New York, NY, USA: Association for Computing Machinery): 610–623. doi:10.1145/3442188.3445922. ISBN 978-1-4503-8309-7.
  15. Who owns AI art?
  16. AI-created “virtual influencers” are stealing business from humans, Christina Criddle, Financial Times (on Ars Technica), 2023-12-29
  17. Rise of the racist robots – how AI is learning all our worst impulses
  18. The AI Boom Could Use a Shocking Amount of Electricity
  19. Generating Just a Few AI Images Consumes As Much Energy As Charging Your Smartphone
  20. Stephen Hawking on Reddit, 2015.