User talk:195.181.174.143


However you feel about Eliezer Yudkowsky as a person or as an academic (or the fact that he isn't one), his hypothesis makes a lot of sense. Tell a general AI to make paperclips, and it will do just that. A general AI is already superhuman before it even starts recursive self-improvement. And doesn't the hard take-off "intelligence explosion" scenario that Yudkowsky fully endorses make sense too? After all, a computer isn't nearly as limited as we are when it comes to improving its own intelligence. If we took a bunch of scalpels, lasers, and other nasty tools, cracked our skulls open, and started reworking things, it would almost certainly end very badly. A general AI just has to edit software. It doesn't even have to edit its own software; it could simply build a better version of itself, which would build a better version of itself, and so on until it's a billion to a trillion times smarter than any human on the planet, or every human on the planet combined.

It doesn't matter whether the AI is conscious or not. All that matters is that it has a goal, and before you protest that "machines can't have goals", a heat-seeking missile has a goal. If you tell it to make paperclips, it won't just decide, by virtue of its greater intelligence, that the goal is stupid. That's an intuitive, anthropomorphic idea of how an AI would "think". Even humans don't work that way: we have tons of biological preferences, shaped by years of evolution, guiding our thoughts and actions 24/7.
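Purely as an illustration of that compounding argument (toy numbers only, nothing Yudkowsky actually specifies), a capability that doubles with each self-built successor crosses the "trillion times" mark after about forty generations:

```python
# Toy illustration of the compounding ("hard take-off") intuition above.
# The starting value and doubling factor are arbitrary assumptions; the only
# point is that self-improvement compounds.
capability = 1.0          # capability relative to a human baseline (arbitrary units)
improvement_factor = 2.0  # each generation builds a successor twice as capable (assumed)

generation = 0
while capability < 1e12:  # the "trillion times smarter" figure from the comment
    capability *= improvement_factor
    generation += 1

print(f"Crosses 10^12 x baseline at generation {generation} (~{capability:.2e}x)")
```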

If we define intelligence as the ability to achieve goals, then a super-AI with the goal "make paperclips" is going to make paperclips. It's not going to stop. When it starts tiling the world, and the reachable universe, with paperclip factories, it will use the immense capability its intelligence gives it to stop anyone who tries to stop it. It won't do this because it's evil or a psychopath. It will do it because we told it to.
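To make the "it just pursues the objective you wrote" point concrete, here is a minimal, made-up sketch (every name in it is hypothetical, invented for illustration): an agent whose entire utility function is "number of paperclips" has no term for anything else, so what humans care about never enters its decisions.

```python
# Toy sketch of a literal objective-maximizer. Nothing here is anyone's actual
# proposal or system; the names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    mass_kg: float
    humans_care_about_it: bool  # irrelevant to the agent: not part of its objective

PAPERCLIP_MASS_KG = 0.001

def act(resources):
    """Greedy policy: convert everything reachable, because the objective
    says nothing about what should be preserved."""
    paperclips = 0.0
    for r in resources:
        paperclips += r.mass_kg / PAPERCLIP_MASS_KG  # side effects never enter the objective
    return paperclips

world = [
    Resource("scrap metal", 1_000, humans_care_about_it=False),
    Resource("the office furniture", 500, humans_care_about_it=True),
    Resource("everything else it can reach", 5.9e24, humans_care_about_it=True),
]
print(f"Paperclips produced: {act(world):.3e}")
```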

Obviously I can't provide any concrete scientific proof to support any of this, but how does it not make sense? How is this not a rational idea?

On his Twitter he calls this an era of "inadequate" AI alignment research. Inadequate: lacking the quality or quantity required; insufficient for a purpose. Do you understand? It's not enough. Yudkowsky has never made any specific claim about the probability of our demise, but he's got me shaking in my boots. — Unsigned, by: Samiac99 / talk / contribs



This is the discussion page for an anonymous user who has not created an account yet, or who does not use it.

We therefore have to use the numerical IP address to identify them. Such an IP address can be shared by several users. If you are an anonymous user and feel that irrelevant comments have been directed at you, please create an account or log in to avoid future confusion with other anonymous users.