Talk:Artificial intelligence/Archive1


This is an archive page, last updated 4 October 2022. Please do not make edits to this page.

Comment[edit]

The last sentence, unless cited, would seem to be more of a statement of opinion than an objective statement of fact. Rational Edfaith 13:40, 14 April 2008 (EDT)

Artificial Stupidity[edit]

When adding new text to this page, feel free to add a parody of the same text to the page Artificial Stupidity (so guests can more easily see the parallels). — Unsigned, by: Plasmageek / talk / contribs

"no longer taken seriously"[edit]

I'll go ask the AI guys tomorrow. Tytalk 01:07, 25 October 2011 (UTC)

Maybe I'm just not skeptical enough, but I fail to see how someone could deny the eventuality of advanced AI. To me, it's only a question of when, not if.
Then again, I'm also a transhumanist, and I think it's pretty much inevitable that scientists are going to figure out how to put our brains into robots or something -- unless, of course, we kill each other off before then. Fallacy (talk) 03:17, 25 October 2011 (UTC)
It seems to me that the "AI can't be done" argument is just a form of dualism. The suggestion seems to be that there is "something" in the biological human brain which cannot be duplicated in silicon (or whatever). I can see no logical reason why not.
Having said that, I can see two enormous problems which mean that we'll take a long time to get there:
  1. We have nowhere near a complete understanding of the thing we are trying to duplicate. For example, there is still intense debate about the nature of (or the existence of) free will, along with the nature and function of consciousness. It does not look as though we will be resolving these questions any time soon.
  2. Even if we had a clear idea about how these things work in the human brain, we are also nowhere near being able to simulate such things in software.
So given that we neither know what we want to achieve nor how to achieve it, it's going to be a long haul.
As for the transhumanist idea that we will be able to copy brain states into machines, we simply have no evidence that this will be possible. As far as I am aware, this is simply a matter of faith in the transhumanist movement.--Bob
I consider it fairly obvious that technology like that will become possible, sometime in the future (science marches on, after all). I see no evidence that the human mind is anything other than a complex organic machine. Maybe I do just have faith in transhumanism/science/what have you. Fallacy (talk) 17:54, 25 October 2011 (UTC)
Up to the faith bit I agree with you. We have evidence that brains and minds exist because we see biology producing many different types. That tells us that it can be done by nature and gives us some confidence that we should be able to reproduce the feat because we know it can be done.
But we have never seen a "mind" transferred. We do not even have a very good idea of what a "mind" is. Given this, how can you have faith that a mind can be transferred? --BobSpring is sprung! 20:21, 25 October 2011 (UTC)
I don't -- I just think there's a very high chance we'll be able to someday in the future. Fallacy (talk) 20:48, 25 October 2011 (UTC)
The AI guys agreed with Bob. Tytalk 15:07, 25 October 2011 (UTC)
Can I toss in 10 cents from the language side? You all are computer people, but I spend my time reading about language, memory, speech, visual recognition, etc., and from everything I've read, we have no idea how any of that works. I mean, we know "it's electro-chemical", but that's it. By looking at people who have had a stroke, for example, we can see some of the ways the brain interacts with itself. But programming a computer to "look for a face" when given the description of a face is quite different from a newborn baby cognitively mapping her world to see "faces", "happy faces", "danger", etc., much less "language", the idea that "things will drop - always - if you let them go", or that "even though something goes behind a panel, it has not disappeared". We don't understand how humans learn this yet, and until we do (and I agree with you all that it's just about time, not "impossible") we can't make a machine do the same. Right now, all machines can do is copy us, based on patterns we provide. GodotTue pour toujours, et tu veux vivre aussi. 15:22, 25 October 2011 (UTC)
Hey! I'm not a "computer person"! Take care with them there generalisations.--BobSpring is sprung! 16:23, 25 October 2011 (UTC)
Oops. But you know so much more about it than I do. That sorta "by default" makes you a "computer person" to me. Let me guess: at home, your mom and dad and your aunt Martha all call you for tech help, right? --GodotTue pour toujours, et tu veux vivre aussi. 16:44, 25 October 2011 (UTC)
There you go with the assumptions again! Being not too far shy of 60 I'm a bit short of elderly relatives and my scientific knowledge is largely based on a life-long subscription to New Scientist. :-) --BobSpring is sprung! 16:50, 25 October 2011 (UTC)
Damn, that's 2 for 2. I should quit while I'm only a little behind. Hehe.--GodotTue pour toujours, et tu veux vivre aussi. 16:52, 25 October 2011 (UTC)
And that's another thing you've got wrong because .... Ah. You didn't say anything..... Well. that's all right then. --BobSpring is sprung! 17:13, 25 October 2011 (UTC)
The fact that we're far from making an actual AI means that it's reduced to little more than philosophical musings. The work of the actual scientists and engineers hammering away at the computing power and the coding needed for this, and of the psychologists and neurologists looking into ourselves to see how we tick, is legitimate. But then there are people bullshitting about how AI is going to lead to a singularity; that is the stuff that isn't taken seriously by skeptics. I'd put it on par with cryonics and nanobots. ADK...I'll swirl your driptray! 15:46, 25 October 2011 (UTC)
But what about a nanobot singularity? That makes human popsicles for some unfathomable reason? Tytalk 16:01, 25 October 2011 (UTC)
"...that is the stuff that isn't taken seriously by skeptics." Well, most "skeptics" at least. Nebuchadnezzar (talk) 20:23, 25 October 2011 (UTC)

When[edit]

... will there be a society and/or article devoted to the rights of entities with artificial intelligence? (Marvin the Paranoid Android, Star Trek Data, B7 Orac etc)? 212.85.6.26 (talk) 17:54, 8 December 2011 (UTC)

Once you have an intelligence, we'll support your rights! Course, the burden of proof will be on you, which might be a problem... GodotI live in the Infinite monkey cage 18:17, 8 December 2011 (UTC)
AI is nowhere near the level of reasoning that Hollywood sells us. There are no Johnny Fives, no Bicentennial Men, no Cylons, no VIKIs. Newton and Siri don't even have the capacity to return results that are creative. Show me a machine that can register all six levels of Bloom's Taxonomy (which some humans can't even manage) and maybe we can start talking about rights. -- Seth Peck (talk) 18:22, 8 December 2011 (UTC)
I feel that we should have an article devoted to the rights of entities with artificial intelligence when such an entity is capable of writing one.--BobSpring is sprung! 18:39, 8 December 2011 (UTC)
Though to be honest, I've met quite a few people online who I would swear have artificial intelligence. Not exactly of the computer-generated type, either. GodotI live in the Infinite monkey cage 18:43, 8 December 2011 (UTC)

And the concept of AI rights has not appeared much in SF either (AFAIK).

Another component of the discussion: when Dave Bowman switches off HAL's 'entity files', what is the legal definition of his action? Self-defence-in-reaction has to be proportionate to the original offence. 82.44.143.26 (talk) 15:45, 9 December 2011 (UTC)

Same thing that occurs when you shut off or reformat a malfunctioning operating system. -- Seth Peck (talk) 15:59, 9 December 2011 (UTC)

The point at which the issue shifts from 'techie stuff' to 'lawyers with fees' is the point at which AI actually becomes an issue. 82.44.143.26 (talk) 16:02, 9 December 2011 (UTC)

The point at which this becomes important is the point when AIs actually become remotely possible. I suspect that this will not be in the foreseeable future.--BobSpring is sprung! 16:56, 9 December 2011 (UTC)

'Reason number XXZ for snarking SF writers'? Noting that there is more to AI than answering a few questions behind a curtain? Given the case for human rights and animal rights what about plant rights and construct rights? 212.85.6.26 (talk) 18:29, 13 December 2011 (UTC)

Stages in the evolution of AI[edit]

The first stage has been achieved - they decide how to interpret your instructions (autocorrect selects the most inappropriate word possible, etc.).

'Cat typing' produces meaningful results.

Checkout-point newspapers aimed at computers, and AIs standing/running for election. 171.33.222.26 (talk) 16:36, 4 November 2013 (UTC)

Oh dear, my AI software hates it when you anthropomorphise it like that. Innocent Bystander (talk) 16:44, 4 November 2013 (UTC)

Computers do 'creatively rearrange' what you do if you do not keep an eye on things.

'Cat typing' exists.

What would humans, aliens and 'constructed sentient entities' make of each other's magazines and the problem pages therein? (Probably more categories would overlap than might be expected - fashion disasters, diets and relationships, celebrity house/clothes/makeup makeovers...) 171.33.222.26 (talk) 17:26, 4 November 2013 (UTC)

Dreyfus's critique[edit]

I added a short paragraph about Hubert Dreyfus's critique of AI research after the bit about Searle. If anyone more familiar with his work could add to it, especially regarding his specific critiques of the assumptions of early AI research, I'd appreciate it. By the way, I use the word "early" here simply because I assume that AI researchers have caught on to his critique. My knowledge is taken mostly from second-hand sources, so any discussion of the matter might be fruitful.

How AI will develop[edit]

Most probably [1]. 82.44.143.26 (talk) 18:11, 9 June 2015 (UTC)

AI on Twitter[edit]

Should there be a mention of the foul-mouthed UKIP-supporting bot on Twitter? 31.51.113.172 (talk) 22:32, 25 March 2016 (UTC)

Do Robots Deserve Rights? What if Machines Become Conscious?[edit]

KurzGesagt is a ****ing treat. Reverend Black Percy (talk) 11:34, 24 February 2017 (UTC)

When computers become sentient[edit]

There will be CRationalWiki - with articles on Human Woo, and [surviving the apocalypse]. 82.44.143.26 (talk) 15:13, 30 June 2017 (UTC)

This is generally wrong.[edit]

"Artificial Intelligence" is a word coined by Marvin Minsky that in the computer science community is understood to refer to techniques and devices that exhibit intelligent - as in, "optimal", problem solving behaviour.

What this article seems to refer to is so-called "AGI", or Artificial general intelligence.
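A minimal sketch of "AI" in that narrow computer-science sense, i.e. a technique exhibiting optimal problem-solving behaviour (the graph and edge costs below are invented for illustration): uniform-cost search always returns the cheapest path cost, with no pretence of general intelligence.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Return the cheapest path cost from start to goal, or None."""
    frontier = [(0, start)]  # priority queue ordered by path cost so far
    seen = set()
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost      # first time the goal is popped, the cost is optimal
        if node in seen:
            continue
        seen.add(node)
        for neighbour, step_cost in graph.get(node, []):
            heapq.heappush(frontier, (cost + step_cost, neighbour))
    return None

# Toy graph: the direct A->C edge costs 5, the detour via B costs 2.
graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)]}
print(uniform_cost_search(graph, "A", "C"))  # -> 2
```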

CAPTCHA[edit]

I read somewhere that the CAPTCHA pictures are used to train AI (the computer learns by seeing what people select). Is this the case? 82.44.143.26 (talk) 15:39, 11 June 2019 (UTC)

It was certainly the case with Google's ReCaptcha. Their first generation showed users two words their book-reading software had trouble identifying, the notion being that the low-certainty one of the pair would train the AI, and the high-certainty one would verify that the user was human.
With gen 2, they've now moved on to those "click all that contain traffic lights" prompts to train their image recognition systems. ikanreed 🐐Bleat at me 15:46, 11 June 2019 (UTC)
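A minimal sketch of that first-generation scheme as described above (purely illustrative; this is not Google's actual code, and the function and field names are hypothetical): verify the user on the control word with the known answer, and only then record their reading of the unknown word as a training vote.

```python
def check_response(control_answer, user_control, user_unknown,
                   unknown_id, training_votes):
    """Verify the user on the control word; on a pass, store their
    reading of the unknown word as one training vote."""
    if user_control.strip().lower() != control_answer.lower():
        return False  # failed the human test, so discard the other answer too
    training_votes.setdefault(unknown_id, []).append(user_unknown.strip().lower())
    return True

votes = {}
ok = check_response("chimney", "Chimney", "parlour", "scan_0042", votes)
print(ok, votes)  # -> True {'scan_0042': ['parlour']}
```

Aggregating many users' votes per unknown word is what would eventually yield a transcription the OCR system could trust.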
Can they be trained with 'traffic light biscuits'? (See web for recipes.) 82.44.143.26 (talk) 15:49, 2 July 2019 (UTC)