Talk:Artificial intelligence


This science-related article has been awarded BRONZE status for quality. It's getting there, but could do with improvement. See RationalWiki:Article rating for more information.


Archives for this talk page: (new)


Google

Should there be some mention of [1]? (other sources available) Anna Livia (talk) 22:42, 5 July 2022 (UTC)

Will add [2]. Anna Livia (talk) 12:54, 23 July 2022 (UTC)
It could go with some more general discussion of the problem that people are not very discerning about what is and is not intelligent (i.e., they are easy to fool), and also (maybe along with it) of chat programs being falsely promoted as intelligent. Maybe a section about good old chatbots and what they are and are not. Here's another article about people being confused about chatbots.[3] (But I didn't understand your second link.) --ApooftGnegiol (talk) 18:51, 24 July 2022 (UTC)
Put in a better link (if not the original one) - there is a case for 'multiple clipboards' (for use by non-experts) on occasion. Anna Livia (talk) 12:34, 28 July 2022 (UTC)

Applying a little skepticism to testing for intelligence

If a stage magician does something that looks superhuman, does this mean that something powerful beyond what is currently known has been observed, or simply that the limits of your powers of observation and judgment have been overcome?

If a computer program writes something that seems human, does this mean that an intelligent being is writing it, or simply that the limits of your powers of observation and judgment have been overcome?

By the same logic by which many accept the Turing Test, the first case should also be accepted. You should then accept as genuinely real anything a stage magician can pull off in such a way that people cannot account for it and are impressed. It is plainly a stunning lack of the simplest skeptical thinking (in one particular area) that leads people to accept the Turing Test as basically sensible at all.

This does not exclude the possibility that genuinely intelligent artificial beings could be created, nor the possibility that, say, a stage magician could acquire some fancy new technology that is able to create an unusual impression.

I think this article is not much of a skeptical article if it doesn't bring this basic issue up, along with the related issue of the anthropocentric thinking at the core of the Turing Test. The ability to convince humans of humanness is not only a poor metric for intelligence, but one that would plausibly lead humans to reject the intelligence of most non-human forms of intelligent life that may exist, were humans to come into contact with them.

The article, at least the introduction, is also currently a mess because it juxtaposes different ideas labeled AI without making clear how they relate to one another. --ApooftGnegiol (talk) 23:39, 23 July 2022 (UTC)

The Turing Test - though widely touted - is absolutely terrible, for the reasons you state.
It does not test the thing which purports to be intelligent; rather, it tests the people looking at the test. The less knowledgeable the audience is, the more likely the program is to pass the test. Bob"Life is short and (insert adjective)" 06:53, 24 July 2022 (UTC)
Could one say the Turing test is of its time (and, partly, of the then-current SF concerning AI), so a more modern version is needed? (And where would the chatbot that was switched off because it picked up the wrong sort of views from those in its discussion group fit in?) Anna Livia (talk) 12:39, 28 July 2022 (UTC)

Power consumption

So what time period does the wattage/megawattage refer to?

And for another example of 'the sky is falling' prognostication, I will mention [4] (the obvious typo, 'pus' rather than 'puts', does somewhat spoil the argument). Anna Livia (talk) 20:10, 16 September 2022 (UTC)
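(A note on the units, since the question above turns on them: a watt is already a rate - one joule per second - so a wattage figure does not refer to any time period; only energy figures such as watt-hours or joules do. As a hypothetical worked example: a data centre drawing a steady 10 MW consumes 10 MW × 24 h = 240 MWh ≈ 0.86 TJ per day.)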

Updates, is it shaping up?

There are more sources and text now than when the article was tagged as lacking them in October 2022. There's less confusion in the text too, though I'm not sure whether every detail is accurate. Also, I very recently expanded it some to cover 2023's LLM developments and their relation to AGI. It's still not great, but maybe it's no longer at the "significantly problematic" level. Thoughts? --ApooftGnegiol (talk) 19:15, 27 January 2024 (UTC)

With @Panzerfaust's edits and my own, I think the "risks" section is now basically reasonable (though it could be turned into more than a list-shaped section). I've now set the level to simply no brainstar. --ApooftGnegiol (talk) 23:47, 2 February 2024 (UTC)

Is this useful for our article?

https://www.sagaftra.org/videogamestrike

Several videogame voice actors are, apparently, going on strike over AI protections. Arcadium Trancefer (talk) 21:48, 30 July 2024 (UTC)

What is intelligence?

>"would rather possess a general capability of thinking, learning, and improving much like an organic mind (though not necessarily working like an organic brain)."

So, something that is like a human mind, but simultaneously unlike a human mind? How do you define "a general capability of thinking, learning, and improving" outside of the context of an organic human brain? What does it mean to think or to learn? What is intelligence? Is intelligence some kind of universal attribute that exists to varying degrees in different lifeforms/machines, or is it an entirely anthropocentric concept based on our own human perceptions and biases? If we encountered an alien lifeform in outer space, how would we know whether it is intelligent? Why do we consider certain animals to be intelligent? As far as I can tell, our definition of "intelligence" is little more than a synonym for "human-like". If an animal engages in human-like behavior, such as altering its environment, using tools, or responding to human language, then we consider it intelligent. If an alien lifeform builds cities and spaceships and explores the universe like we do, then we consider it intelligent. And when we start talking about "higher intelligence" or "superhuman intelligence", we get into real trouble, searching for a "superior" version of the human mind based on some arbitrary set of human virtues - in other words, searching for God. It is for this reason that I think "artificial general intelligence" is little more than an ill-defined quasi-religious pipe dream. --Bjorn (talk) 02:59, 3 August 2025 (UTC)

Problems of definition are a well-discussed theme nowadays, growing ever bigger while remaining mostly ignored by the world at large. When it comes to superintelligence, it is indeed basically all extrapolation from human intelligence. "Somewhat like a human mind, but maybe not exactly like that" is the more general idea if you ditch the "super-" part.
"Learning" is a much easier question to deal with, as there's the basic stuff -- conditioning and comparing responses to stimuli in different situations -- which is far more general than looking at humans. Accordingly, insects often have something which the currently latest and greatest "AIs" lack, even though said insects lack marketable skills to go along with their very general, very rudimentary ability to learn. "General intelligence" can be considered at greatly subhuman levels too, which is an interesting topic, as if AI could develop that -- artificial creatures with simpler "minds", that live their own lives and learn/change from interacting with their environments, for a start -- then there may be a way to eventually scale up from that achievement to the hopes of AI developers. But conflicting definitions and ideas are in use.
The religious character/parallels of most hopes and dreams and fears surrounding AI are worth covering at least briefly in this article too. --ApooftGnegiol (talk) 13:17, 16 September 2025 (UTC)
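To illustrate the "conditioning and comparing responses to stimuli" idea above, here is a minimal Python sketch of that rudimentary, very general kind of learning. Everything in it (the ConditioningAgent class, the toy stimuli and responses) is invented for this example; it is not any particular AI system's method, just about the simplest reward-driven stimulus-response loop one can write down.

import random
from collections import defaultdict

class ConditioningAgent:
    """Learns stimulus-response associations from reward alone (operant conditioning)."""
    def __init__(self, responses, learning_rate=0.1, exploration=0.1):
        self.responses = list(responses)
        self.lr = learning_rate
        self.eps = exploration
        # strength[stimulus][response]: learned association strength
        self.strength = defaultdict(lambda: defaultdict(float))

    def act(self, stimulus):
        # Occasionally try a random response (exploration)...
        if random.random() < self.eps:
            return random.choice(self.responses)
        # ...otherwise give the response with the strongest learned association.
        return max(self.responses, key=lambda r: self.strength[stimulus][r])

    def reinforce(self, stimulus, response, reward):
        # Nudge the association strength toward the reward just received.
        s = self.strength[stimulus][response]
        self.strength[stimulus][response] = s + self.lr * (reward - s)

# Toy world: the bell signals food, the shadow signals danger.
agent = ConditioningAgent(responses=["approach", "flee"])
appropriate = {"bell": "approach", "shadow": "flee"}
for _ in range(500):
    stimulus = random.choice(list(appropriate))
    response = agent.act(stimulus)
    reward = 1.0 if response == appropriate[stimulus] else -1.0
    agent.reinforce(stimulus, response, reward)

for stimulus in appropriate:
    print(stimulus, "->", agent.act(stimulus))  # now usually the conditioned response

The point of the sketch: nothing in it models humans at all, yet it "learns" in the conditioning sense of adjusting responses to stimuli based on outcomes -- which is why an insect can clear this bar while still lacking anything like marketable skills.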