Essay:Why there is no problem of friendly AI

This essay is an original work by Baloney Detection.
It does not necessarily reflect the views expressed in RationalWiki's Mission Statement, but we welcome discussion of a broad range of ideas.
Unless otherwise stated, this is original content, released under CC-BY-SA 3.0 or any later version. See RationalWiki:Copyrights.
Feel free to make comments on the talk page, which will probably be far more interesting, and might reflect a broader range of RationalWiki editors' thoughts.

The mission of the Singularity Institute for Artificial Intelligence is to ensure that when a human-level (or thereabouts) AI is created, it will be “friendly” to humans.[1] They believe that an AI would (or at least theoretically could) continuously improve itself, or build better versions of itself, and thus rapidly rise far above humans in intelligence. This scenario is typically presented with doomsday rhetoric as a problem that needs to be solved quickly.

I want to argue that the problem of “friendly AI” is not a problem at all. The key is Hume's law: as you can read at the link, no mere facts will motivate you to do anything. What Hume called “passions” are what trigger and drive you. Facts can help you achieve your goals, but they can't set them. I'm not going to argue here why I think Hume was right; instead I'll direct you to the link.[2] If Hume's law turns out to be contentious, though, I might write a separate essay explaining why I think it is correct.

With this in mind, consider an AI. The goals it tries to reach would have to be set by its programming, and those are the only goals it would pursue. It wouldn't casually decide to make humanity go extinct (unless, of course, it's programmed with that purpose).

If you have played strategy computer games, this should be familiar. Remember good old Age of Mythology? In a skirmish game, you can decide what “personalities” the AI players should have. Do you want them to be aggressive rushers? Or do you want them to focus on their economies? Your selection determines what the AI will strive toward. An AI doesn't strive toward things randomly; it strives toward what it has been programmed to strive toward. If it has not been programmed to strive toward any particular goal, it won't do anything.
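To make the point concrete, here is a minimal sketch (all names and numbers are hypothetical, not taken from any actual game): a game AI's “personality” is just a set of weights in an objective function, and the AI pursues nothing that isn't in that objective.

```python
from dataclasses import dataclass

@dataclass
class Personality:
    """Weights that define what the AI values -- its programmed 'passions'."""
    military_weight: float
    economy_weight: float

# Two selectable skirmish personalities, expressed as objectives.
AGGRESSIVE_RUSHER = Personality(military_weight=0.9, economy_weight=0.1)
ECONOMY_FOCUSED = Personality(military_weight=0.2, economy_weight=0.8)

def score(outcome: dict, p: Personality) -> float:
    """How desirable an outcome is, purely according to the given weights."""
    return (p.military_weight * outcome["army_gain"]
            + p.economy_weight * outcome["resource_gain"])

def choose_action(actions: dict, p: Personality) -> str:
    """The AI simply picks whichever action its objective scores highest.
    Nothing outside the objective -- no hidden drives -- enters the choice."""
    return max(actions, key=lambda name: score(actions[name], p))

possible_actions = {
    "train soldiers": {"army_gain": 5.0, "resource_gain": -2.0},
    "build farms":    {"army_gain": 0.0, "resource_gain": 4.0},
}

print(choose_action(possible_actions, AGGRESSIVE_RUSHER))  # -> "train soldiers"
print(choose_action(possible_actions, ECONOMY_FOCUSED))    # -> "build farms"
```

Swap the weights and the same machinery chases a different goal; remove the objective entirely and there is nothing for it to chase.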

Want to create a friendly AI? Define “friendly” and then program the AI to try to achieve that. In addition, for the AI to achieve anything, it must be given resources. It can't just “think” its way to its goals; it must be given some means to act. Merely existing as a program on a computer gives it very little ability to affect the world. Humans would have to enable the AI, in a whole lot of ways, to do whatever it is programmed to do before it could achieve anything.
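A toy sketch of both points together (everything here is hypothetical, and the stand-in objective is deliberately simplistic; writing a real definition of “friendly” is the programmer's job, not the AI's): the goal is whatever objective we write down, and the AI's reach is limited to the interfaces we hand it.

```python
def friendliness(outcome: dict) -> float:
    """A stand-in definition of 'friendly': the AI optimizes whatever
    we write here, nothing more."""
    return outcome["human_wellbeing"] - outcome["human_harm"]

# The AI's only means of affecting the world: the actions we expose to it.
# An action not listed here simply does not exist for the agent.
available_actions = {
    "send report": {"human_wellbeing": 1.0, "human_harm": 0.0},
    "do nothing":  {"human_wellbeing": 0.0, "human_harm": 0.0},
}

def act(actions: dict) -> str:
    """Pick the exposed action that best satisfies the programmed objective."""
    return max(actions, key=lambda a: friendliness(actions[a]))

print(act(available_actions))  # -> "send report"
```

The sketch is trivial on purpose: the agent's goals come from `friendliness`, and its power comes from `available_actions`, and both are supplied by humans.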