Talk:Artificial intelligence

This science-related article has not received a brainstar for quality. Please consider expanding the article appropriately. See RationalWiki:Article rating for more information.


Google

Should there be some mention of [1]? (other sources available) Anna Livia (talk) 22:42, 5 July 2022 (UTC)

Will add [2]. Anna Livia (talk) 12:54, 23 July 2022 (UTC)
It could go with some more general discussion of the problem of people not being very discerning about what is and is not intelligent (i.e. being easy to fool), and also (maybe along with it) chat programs falsely being promoted as intelligent. Maybe a section about good old chatbots and what they are and are not. Here's another article about people being confused about chatbots.[3] (But I didn't understand your second link.) --ApooftGnegiol (talk) 18:51, 24 July 2022 (UTC)
Put in a better link (if not the original one) - there is a case for 'multiple clipboards' (for use by non-experts) on occasion. Anna Livia (talk) 12:34, 28 July 2022 (UTC)
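
To make concrete what is meant above by "good old chatbots": the classic ELIZA-style programs do nothing but keyword matching and canned templates, with no model of meaning at all. A minimal sketch (all rules and replies are invented for illustration, not taken from any real product):

 # Minimal ELIZA-style chatbot sketch: keyword matching plus canned
 # templates, with no understanding of the conversation.
 import re

 RULES = [
     (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
     (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
     (re.compile(r"\byes\b", re.I), "I see. Please go on."),
 ]

 def reply(user_input):
     for pattern, template in RULES:
         match = pattern.search(user_input)
         if match:
             return template.format(*match.groups())
     return "Can you elaborate on that?"  # fallback when nothing matches

 print(reply("I feel misunderstood"))  # Why do you feel misunderstood?
 print(reply("My computer hates me"))  # Tell me more about your computer.

That such a trivial mechanism can nonetheless impress people is part of why chat programs get promoted as intelligent.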

Applying a little skepticism to testing for intelligence

If a stage magician does something that looks superhuman, does this mean that something powerful beyond what is currently known has been observed, or simply that the limits of your powers of observation and judgment have been overcome?

If a computer program writes something that seems human, does this mean that an intelligent being is writing it, or simply that the limits of your powers of observation and judgment have been overcome?

By the same logic that leads many to accept the Turing Test, the first case should also be accepted. You should then accept as genuinely real anything that a stage magician is able to pull off in such a way that people can't account for it and are impressed. It is plainly a stunning lack of the simplest skeptical thinking (in one particular area) that leads people to accept the Turing Test as basically sensible at all.

This does not exclude the possibility that genuinely intelligent artificial beings could be created, nor the possibility that, say, a stage magician could acquire some fancy new technology that is able to create an unusual impression.

I think this article is not much of a skeptical article if it doesn't bring this basic issue up. And also the related issue of anthropocentric thinking at the core of the Turing Test. The ability to convince humans of humanness is not only not a good metric for intelligence, but also one that would plausibly lead to the rejection of the intelligence of most potential non-human forms of intelligent life that may exist, were humans to come into contact with them.

The article, at least the introduction, is also currently a mess because it juxtaposes different ideas labeled AI without making clear how they relate to one another. --ApooftGnegiol (talk) 23:39, 23 July 2022 (UTC)

The Turing Test - though widely touted - is absolutely terrible, for the reasons you state.
It does not test the thing which purports to be intelligent; rather, it tests the people looking at the test. The less knowledgeable the audience is, the more likely the program is to pass the test. Bob"Life is short and (insert adjective)" 06:53, 24 July 2022 (UTC)
Could one say - the Turing test is of its time (and, partly, of the then-current SF concerning AI), so a more modern version is needed? (And where would the chatbot that was switched off, because it picked up the wrong sort of views from those in the discussion group, fit in?) Anna Livia (talk) 12:39, 28 July 2022 (UTC)
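
As a toy illustration of the point that the test really measures the judges: in the sketch below the same fixed bot "passes" far more often in front of less discerning judges. All numbers are invented and nothing here models any real program; it only shows how the outcome tracks the audience.

 # Toy simulation: the same chatbot judged by panels of differing
 # discernment gets very different "pass" rates. All probabilities
 # are made up; this models the judges, not any real AI.
 import random

 BOT_QUALITY = 0.3  # fixed, invented "human-likeness" of the bot's answers

 def judge_says_human(discernment):
     # A judge mistakes the bot for a human more easily the less
     # discerning they are.
     return random.random() < BOT_QUALITY + (1.0 - discernment) * 0.6

 def pass_rate(discernment, trials=10000):
     wins = sum(judge_says_human(discernment) for _ in range(trials))
     return wins / trials

 random.seed(0)
 print("naive judges:    ", pass_rate(discernment=0.2))  # roughly 0.78
 print("skeptical judges:", pass_rate(discernment=0.9))  # roughly 0.36

The program never changes; only the panel does, which is exactly the objection raised above.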

Power consumption

So what time period does the wattage/megawattage refer to?

And for another example of 'the sky is falling' prognostication, I will mention [4] (the obvious typo of 'pus' rather than 'puts' does somewhat spoil the argument). Anna Livia (talk) 20:10, 16 September 2022 (UTC)
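
On the wattage question above: watts and megawatts measure power, which is a rate, so a figure in megawatts only becomes an amount of energy once a time period is attached (megawatt-hours, for instance). A small worked example with placeholder numbers, not figures from the article:

 # Watts measure power (a rate); energy is power multiplied by time.
 # The numbers below are placeholders, not measurements.
 power_mw = 1.0                 # hypothetical continuous draw, in megawatts
 hours_per_day = 24
 hours_per_year = 24 * 365

 energy_mwh_per_day = power_mw * hours_per_day    # 24 MWh per day
 energy_mwh_per_year = power_mw * hours_per_year  # 8760 MWh per year

 print(f"{energy_mwh_per_day:.0f} MWh/day, {energy_mwh_per_year:.0f} MWh/year")

So a bare "X megawatts" claim is only meaningful alongside the period (or the assumption of continuous operation) it refers to.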

Updates, is it shaping up?

There are more sources and text now than when the article was tagged as lacking them in October 2022. There's less confusion in the text too, though I'm not sure whether every detail is accurate. Also, I very recently expanded it some to cover 2023's LLM developments and their relation to AGI. It's still not great, but maybe it's not at "significantly problematic" level anymore. Thoughts? --ApooftGnegiol (talk) 19:15, 27 January 2024 (UTC)

With @Panzerfaust's and my own edits, I think the "risks" section is now basically reasonable (though it could be turned into more than a list-shaped section). I've now set the level to simply no brainstar. --ApooftGnegiol (talk) 23:47, 2 February 2024 (UTC)

Is this useful for our article?

https://www.sagaftra.org/videogamestrike

Several videogame voice actors are, apparently, going on strike over AI protections. Arcadium Trancefer (talk) 21:48, 30 July 2024 (UTC)