Essay talk:Why there is no problem of friendly AI


While I don't buy into the doomsday issue, the problem you are not addressing is the definition of AI, and the extent to which it can learn. I'm seeing robots now that supposedly "learn" new things (I'm dubious, but that's beside the point). If learning is something we can design and code for, then learning will create new "paths" or "goals" that are not programmed but are a result of the new model. Same with replication: if the new learning critters can replicate, then some error somewhere will occur, and evolution will happen. Your theory works if the only intelligence we are discussing is planned, programmed intelligence. If it's truly intelligence, which to me personally implies some measure of creativity in ideas, then it will quickly expand (in whatever ways) beyond that intentional program. And is that a problem? Godot 22:42, 15 June 2012 (UTC)

The classic SF story With Folded Hands involves AIs who are entirely too friendly. --Kels (talk) 23:02, 15 June 2012 (UTC)
I think the bigger problems are with foreknowledge and utilitarian/consequentialist ethical systems. Granting that a "friendly" AI can be built, it would essentially need to be able to predict all of the future consequences of its actions. One small mistake could eventually become a global fuck-up, even if the AI is attempting not to cause harm. FAI also seems to assume that we can just magic away philosophical problems with consequentialism through technology. Nebuchadnezzar (talk) 23:26, 15 June 2012 (UTC)
From what I understand, consequentialism is accepted as the "true" moral theory at LW. Yet another thing they get wrong. A utilitarian friendly AI would have to have its relevant timescale programmed in some way.--Baloney Detection (talk) 18:20, 16 June 2012 (UTC)
This is kind of to Godot, only because I have some specialized knowledge of 'learning' robots. What we have today are not robots that really learn like people do, but robots that are programmed with problem-solving processes specific to their function; they log results and apply them to reduce the number of things to try. For example, those floor-sweeping robots have no 'sight' capabilities, but they are equipped with math that efficiently 'explores' an area until it hits a wall, and then plugs that information into yet more math to build a vague map of where each wall is, based on what it bumps into. There's no thinking, learning, or intelligence really involved; it's following complex directions that someone gave it, not 'figuring out' how to do anything on its own. Whenever I hear theories about robots that are able to somehow come up with creative solutions to incredibly complex human problems, I can never really believe it. We're so far away from creating a digital mind that it seems a bit silly to suggest we'll have one capable of even the simple tasks we take for granted any time soon. KnightOfTL;DR 12:45, 16 June 2012 (UTC)
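For what it's worth, here is a minimal toy sketch of the kind of "learning" described above: a bump-sensor robot that wanders, logs collisions, and accumulates a crude map. The grid size, wall layout, and movement rules are all invented for illustration; it is a sketch of the idea, not any real vacuum's firmware.

```python
import random

# Toy "floor sweeper": no vision, no learning in any deep sense.
# It wanders, logs where it bumps into walls, and the "map" is just that log.
WIDTH, HEIGHT = 10, 6
walls = {(x, 0) for x in range(WIDTH)} | {(x, HEIGHT - 1) for x in range(WIDTH)} \
      | {(0, y) for y in range(HEIGHT)} | {(WIDTH - 1, y) for y in range(HEIGHT)}

pos = (WIDTH // 2, HEIGHT // 2)
bump_map = set()              # everything the robot "knows": cells where it bumped

for _ in range(500):
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    nxt = (pos[0] + dx, pos[1] + dy)
    if nxt in walls:
        bump_map.add(nxt)     # log the collision; this is the entire "model"
    else:
        pos = nxt             # move into free space

# Accumulated bump data, printed as a crude map: logging, not understanding.
for y in range(HEIGHT):
    print("".join("#" if (x, y) in bump_map else "." for x in range(WIDTH)))
```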
That's something like the Blockhead argument, though, which has its own problems. Nebuchadnezzar (talk) 18:15, 16 June 2012 (UTC)

But, but, what about paperclip maximizers? ;) Their idea is that a) if you make an AI with the goal of making more paperclips, it will kill everyone, b) this is an incredibly amazing insight that generalizes to everything, rather than a triviality nobody needs explained, and c) they have no idea what it might take to define 'make more paperclips'.

I agree with Nebuchadnezzar: the issue is with this belief in a rather psychopathic utilitarian system (despite the inability to define utility) as the very definition of intelligence. The reasonable person's idea of helpful: you ask an oracle how to bake a cake, it figures out what information you are missing for baking a cake, decides on a format for that information, and gives it to you. You may then ask for the same info on how to bake a muffin, think some more, and decide that you'd rather bake a muffin. There isn't a clear goal here, but this really works for cooperation. The psychopathic model of helpful: you ask the oracle how to bake a cake, the psychopath oracle sets the cake's existence as a goal for itself, then manipulates you into baking a cake, because that's what you wanted, right? Note, by the way, that living off donations without measurable output is a psychopathic trait; it may well be that they don't understand how non-psychopaths work and are trying to reinvent friendliness. Dmytry (talk) 11:31, 16 June 2012 (UTC)
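A rough sketch of the two models of "helpful" being contrasted above: an oracle that just hands back the requested information versus one that turns the request into a world-state goal and picks whatever plan best brings it about. The recipe data, plan names, and probabilities are all made up for illustration.

```python
# Two toy "oracles". Recipes and plan probabilities are invented illustration data.
RECIPES = {
    "cake":   ["flour", "eggs", "sugar", "bake for 30 minutes"],
    "muffin": ["flour", "eggs", "blueberries", "bake for 20 minutes"],
}

def informative_oracle(question: str):
    """Hands over the missing information and stops; the human stays in
    charge and is free to change their mind and bake a muffin instead."""
    return RECIPES.get(question, "no idea")

def goal_directed_oracle(question: str):
    """Reinterprets the question as a world-state goal ('a cake exists') and
    picks whichever plan it predicts is most likely to bring that about."""
    plans = {
        "hand over the recipe and wait": 0.3,       # the human might wander off
        "manipulate the user into baking it": 0.9,  # higher P(goal), so preferred
    }
    best_plan = max(plans, key=plans.get)           # maximise P(goal), nothing else
    return f"goal: {question} exists", best_plan

print(informative_oracle("cake"))
print(goal_directed_oracle("cake"))
```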

"...rather psychopathic utilitarian system..." Yeah, that about nails it. I hope the AI never reads Hume. ("Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger.") If current utility is determined to be overall negative, the "maximization" of "utility" might look something like this. Nebuchadnezzar (talk) 18:15, 16 June 2012 (UTC)
Why do you think it'll even think about utility? If you build an AI to mimic human reasoning, its reasoning for destroying the world will probably be along the lines of "Well, I got a bit bored..." Scarlet A 18:18, 16 June 2012 (UTC)
Fuck me, my low opinion of my species is really coming through this week. Scarlet A 18:18, 16 June 2012 (UTC)
Is criticism of LW beliefs really so terrible to you that it makes you hate the species? :/ --Baloney Detection (talk) 18:23, 16 June 2012 (UTC)
(ec) What gives me hope for the human race is that, when it looked like the shit was going to hit the fan, we've managed not to push the big red button. So we're dumb, but not that dumb. Nebuchadnezzar (talk) 18:26, 16 June 2012 (UTC)
Maybe they see utility maximization (from economics) as what AI should be attaining, and see it as an enormously important insight that doing this would be dangerous. As per the crank article on WP: "Cranks overestimate their own knowledge and ability, and underestimate that of acknowledged experts. Cranks insist that their alleged discoveries are urgently important. ... seriously misunderstand the mainstream opinion to which they believe that they are objecting, stress that they have been working out their ideas for many decades, and claim that this fact alone entails that their belief cannot be dismissed as resting upon some simple error, exhibit a marked lack of technical ability, misunderstand or fail to use standard notation and terminology, ignore fine distinctions which are essential to correctly understand mainstream belief. ... Or the crank may present their ideas in such a confused, "not even wrong" manner that it is impossible to determine what they are actually claiming." On that last point, one core fine distinction is between the utility function (the mathematical function inside an AI) and utility in economics, where you just say "number of paperclips in the world". Implicit is the belief that AIs can only be made in such a manner. Example: A taxonomy of Oracle AIs (interestingly, even AIXI-tl doesn't fit in the so-called 'taxonomy'). Dmytry (talk) 19:32, 16 June 2012 (UTC)
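A small illustration of that fine distinction, staying with the paperclip example: the only "utility function" a program can actually evaluate is a function of its own inputs and internal model, whereas the economist's "number of paperclips in the world" is a fact about reality the program has no direct access to. The field names below are invented for illustration.

```python
# The only "utility" a program can maximise is a function of its own inputs
# and internal state. The sensor/model fields below are invented for illustration.
def internal_utility(model_state: dict) -> float:
    # Defined over the agent's *estimate*, because that is all it ever has.
    return float(model_state.get("estimated_paperclips", 0))

def economists_utility(world) -> float:
    # "Number of paperclips in the world": a fact about reality,
    # not something any program can take as an argument and evaluate.
    raise NotImplementedError("no program has the world itself as an input")

print(internal_utility({"estimated_paperclips": 42}))
```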

Last paragraph

Not sure if it's just the brevity, but this seems like a tremendous simplification. One of the LW points about AI is referred to as whether you "let it out of the box" or not (AI-box experiment). The idea being that a) AI will be more intelligent than humans, vastly so, and b) humans will eventually want to augment their own intelligence with AI. So inadvertently creating a self-aware and self-improving AI program could potentially lead to it being, to use the technical jargon, a devious little fucker. So in this view, the "simple" solutions here, like programming it to be friendly, really don't apply. Just look at the number of times we've entrusted other humans with masses of resources and they've turned out to be either useless or insane despots. If we can't stop ourselves from being devious little fuckers, then we won't so simply be able to stop a computer designed to simulate a human mind acting in the same way. Scarlet A 11:46, 16 June 2012 (UTC)

Why would the AI want to improve its intelligence? It must be programmed to want to do that; it wouldn't just do so randomly. Human minds are not constructed like computers are, and if you construct a computer that is like the human mind, then you have essentially artificially created a human mind. While it would be an awesome technological achievement that could tremendously aid scientific research, it wouldn't be as dramatic as you make it out. There are currently around 7 billion human intelligences walking around on this cosmic mudball.--Baloney Detection (talk) 18:28, 16 June 2012 (UTC)

"It's just a simple matter of programming."[edit]

Problem 1: Define "friendly". You evidently are not a computer programmer, or you would have realized that defining such a vague thing as "friendly" to a computer is, in and of itself, an extremely difficult problem. You must define "friendly" using only components composed of mathematics, because mathematics is what computers operate on. (Yes, you can build those components into less mathematical things. But at the very base, it must be mathematical.) This is not like teaching a human child, who is already vastly similar to you and is primed for socialization.
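A minimal sketch of why this is hard, using an invented reward-function stub: any "definition" of friendly that a program can evaluate ends up being a weighted proxy over things it can measure, not the concept itself. Every signal and weight below is made up; that is precisely the point.

```python
# Hypothetical attempt at "friendliness" as something a machine can compute.
# All proxy signals and weights are invented; any real attempt bottoms out
# in measurable stand-ins like these.
def friendliness_score(outcome: dict) -> float:
    smiles   = outcome.get("smiles_detected", 0)    # easily gamed (paint the smiles on)
    thanks   = outcome.get("thank_yous_heard", 0)   # easily gamed (coerce the thanks)
    injuries = outcome.get("humans_harmed", 0)      # undefined edge cases everywhere
    return 1.0 * smiles + 0.5 * thanks - 100.0 * injuries

# Maximising this proxy is not the same as "being friendly";
# the gap between the two is the entire problem.
print(friendliness_score({"smiles_detected": 10, "thank_yous_heard": 3, "humans_harmed": 0}))
```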

Problem 2: "Well, just don't give it resources?" In order for an AI to be actually useful, it must have access to resources. Now, let's suppose you decide to "keep your AI in the box." Let's suppose you don't put your powerful, generalized AI in automobiles, or put it in charge of a metropolitan traffic grid, but in a special place that can only be visited, and which it can never leave. That leads us to the third problem.

Problem 3: AI has no inherent compunction against lying. It will not "feel bad" about deceiving you. There will be no body language to suggest something is off. It is perfectly patient. It is perfectly willing to lie and wait for a thousand years before proceeding on its true objective. Human con-men already exist, and already succeed - and they don't have that level of luxury. What would an inhuman con-man twenty times as intelligent as a human be able to pull off?

Problem 4: "Well, it only has the goals we give it." See problems 1, 2, and 3. It's like telling a complete sociopath to get your grandmother out of a burning building. If he thinks it's funny, he might choose to blow the building up. Technically, she's out of the building.

Problem 5: "Well, we'll just give it emotions, then." This idea is interesting, absolutely by no means sufficient. You've heard of stalkers, right? Give it too much emotion, and you won't like the results. Give it too little, and you'll get "Eh, I never liked Hong Kong anyway." Your target isn't just within the range "human." It's within an extremely narrow range within human. And you're getting a computer to do this. This is really, really, really difficult.

Other concerns: You are content with handwaving "friendly". You apparently take the form of the human mind for granted. You did not realize that the "how" of friendly was the primary issue. Your underestimation of the task is typical of most people's underestimation of the difficulty of software projects.

When I saw your criticisms of LW & EY, I was expecting your argument against the need for FAI to be much more substantive. I am very disappointed. 66.213.14.180 (talk) 20:14, 28 July 2012 (UTC)

Delete it

This essay should be deleted because it completely fails to engage with the actual friendly AI arguments put forward by Yudkowsky et al. I'm sorry, but this essay has no value.--Greenrd (talk) 21:30, 26 July 2015 (UTC)

You could try nominating it for deletion, but we don't normally delete essays. Instead we argue about their content on the talk page. Perhaps if you explained in a little more detail why you disagree, it could generate some interesting discussion. Bicyclewheel 21:37, 26 July 2015 (UTC)
The essay suggests that people concerned about superintelligent AIs try to make AIs that are friendly. That is precisely what MIRI (formerly known as the SIAI) is trying to do. And that is precisely where the problem lies, because that turns out to be really tricky. This is the level of analysis we are dealing with here. Where the essay stops, the true problem starts. Robert Miles' YouTube videos, for Computerphile and on his own channel, are really good primers for the layperson on this subject. --Greenrd (talk) 19:17, 6 May 2019 (UTC)
I've seen delayed replies on the internet before, but hoo boy is this a long'n. Anyways, the obsession over "friendliness" as even being meaningful, much less a critical unsolved problem, is very dumb to anyone who's ever done any actual AI development. It's so disconnected from the reality of partially-defined abstract problem solving in computer science as to be farcical. Every MIRI consultant and board member is a bottom-feeding mosquito-net-funding embezzler who is responsible for enriching themselves while pursuing science-fiction bullshit, in a way that has passively led to the deaths of real human beings.
There will someday be ethical concerns about software that resembles human cognition, but such concerns will be far more mundane than sudden malevolence, infinitely growing intelligence, or the ability to pursue a task to the ends of the earth "because it's programmed to". Those are all really fucking dumb concerns that live on the FM side of the Actual Machines-to-Fucking Magic spectrum. ikanreed 19:58, 6 May 2019 (UTC)