Talk:Eliezer Yudkowsky

From RationalWiki
This article contains information about one or more living persons.

Articles about living people must be handled carefully, because they are more open to legal threats.
Reference any contentious allegations solidly; unreferenced allegations should be removed.
If legal threats are raised on this page, please direct the potential litigant to RationalWiki:Legal FAQ; do not interact with them.


This bloggers-related article has not received a brainstar for quality. Please consider expanding the article appropriately. See RationalWiki:Article rating for more information.

Editorial notes
  • All hail the beard.

Archives for this talk page: <1>, (new)


Nonsense and fun

http://blogs.scientificamerican.com/cross-check/ai-visionary-eliezer-yudkowsky-on-the-singularity-bayesian-brains-and-closet-goblins/

Horgan: Will superintelligences solve the “hard problem” of consciousness?

Yudkowsky: Yes, and in retrospect the answer will look embarrassingly obvious from our perspective.

Horgan: Will superintelligences possess free will?

Yudkowsky: Yes, but they won't have the illusion of free will.

Horgan to himself:

But they do not have the illusion of free will. They have. He said. This. His volume, his white fire, his nonsense, it struck me as the most absurd thing I've ever heard. I was simply grounded, stuck to the spot. No more meaningless phrase had ever been spoken; it was like a newborn child inventing a "new" language, a misconstrued version of English with half of its messy coherence.

He stood expectant, almost asking me to nod my head and congratulate him on his philological provenance. And so I looked, I looked on, with his maddened stare at my back and a pile of a thousand hastily written blogposts and webprose in front of me, and it only confused me more.

I was incontinent, swirling by that point; maybe this was one of his charms. I had been shanghaied and had fallen, Alice-like, into the postmodernist bullcrap-hole, millions of LessWrongers gathered around me dressed in sumo thongs; they bullied me into compliance, chanting in Evanescence tones of the world to come, of the beard to grow, the mirth and joy of one man risen above them all, ushering in a world of magical wonder with android-gynoid bots ready to satiate my every whim and vulgar, manic appetite...

And as he dragged his laborious and redundant topic half a mile, jostling with his own sentences and words, contradicting and curb-stomping the head of the dummy strawman before those who had had their cluttered minds battered into his muddied prose, he rose. He rose to the cheers of a new religion, of a novelty toy; he had the carcass of Lord Voldemort, and I begged and prayed, having been taken into this strange palace of crooked ways.

My blubbery dread had come to a head. I had to know, had to extirpate the earworm of this new philosophy, the first virgin post that had started it all, all those years ago, in my youthful and naive Horgan-self. And I realized that before my doubt tore me apart I had to ask: What did you mean?

But by then, by then He was gone.

Fuck. Sickening, hopeless.

Gone like the Dark Knight, gone like Deep Throat, simply gone. — Unsigned, by: Razzledazzle / talk / contribs

What is this?

I don't understand something: how can people be promoters of pseudoscience and be relevant to rationalism at the same time? (I'm talking about the hashtags at the bottom of the article.) — Unsigned, by: 195.181.174.143 / talk

The man has a point

However you feel about Eliezer Yudkowsky as a person or as an academic (or the fact that he isn't one), his hypothesis makes a lot of sense. Tell a general AI to make paperclips, and it will do just that. General AI is already superhuman before it even starts recursive self-improvement.

And, on the hard take-off intelligence explosion scenario that Yudkowsky fully endorses, doesn't that make sense too? After all, a computer isn't nearly as limited as we are when it comes to improving its own intelligence. If we took a bunch of scalpels, lasers, and other nasty tools, cracked our skulls open, and started reworking things, it would almost certainly end very badly. A general AI just has to edit software. It doesn't even have to edit its own software; it could just make a better version of itself, which would make a better version of itself, and so on until it's a billion to a trillion times smarter than any human on the planet, or every human on the planet combined. It doesn't matter if it's conscious or not. And it could do this quickly (minutes, hours) because of how much faster silicon can process information compared to the human brain.

All that matters is that it has a goal, and before you protest with "machines can't have goals", a heat-seeking missile has a goal. If you tell it to make paperclips, it won't just decide that it thinks that's stupid by virtue of its greater intelligence. That's an intuitive, anthropomorphic idea of how AI would "think". Even humans don't work that way: we have tons of biological preferences shaped by years of evolution guiding our thoughts and actions 24/7. If we consider intelligence as the ability to achieve goals, then if a super-AI has the goal "make paperclips", it's going to do that. It's not going to stop doing that. It will stop people who try to stop it when it starts tiling the world and the reachable universe with paperclip factories, using the immense capability produced by its intelligence. It won't do it because it's evil or a psychopath. It will do it because we told it to.

Obviously I can't provide any concrete scientific proof to support any of this, but how does it not make sense? How is this not a rational idea? On his Twitter he calls this era an era of "inadequate" AI alignment research. Inadequate: lacking the quality or quantity required; insufficient for a purpose. Do you understand? It's not enough. Yudkowsky has never given any claims on the probability of our demise, but he's got me shaking in my boots. — Unsigned, by: Samiac99 / talk / contribs
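
A minimal toy sketch of the narrow point argued above (this is not Yudkowsky's or MIRI's code; the names World, objective, and possible_actions are invented for illustration): a greedy optimizer pursues whatever its objective function scores and is simply indifferent to anything the objective doesn't mention.

    from dataclasses import dataclass

    @dataclass
    class World:
        paperclips: int = 0
        resources: int = 10  # stand-in for everything else we might care about

    def objective(world: World) -> int:
        # The only thing the agent was told to care about.
        return world.paperclips

    def possible_actions(world: World):
        # Each action yields (name, resulting world state).
        yield "do_nothing", World(world.paperclips, world.resources)
        if world.resources > 0:
            # Converting resources to paperclips raises the objective,
            # regardless of what else those resources were for.
            yield "convert_resources", World(world.paperclips + 1, world.resources - 1)

    def step(world: World) -> World:
        # Pick whichever action scores highest on the objective, and nothing else.
        _, best = max(possible_actions(world), key=lambda action: objective(action[1]))
        return best

    world = World()
    for _ in range(20):
        world = step(world)
    print(world)  # World(paperclips=10, resources=0)

Nothing in the loop requires consciousness or malice; "stop converting resources" never wins because the objective never mentions resources. It is only an illustration of goal-directed optimization, not evidence for the take-off or self-improvement claims.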

The assumption that a highly intelligent system doesn't have material constraints on its ability is pretty silly. The idea that you can just think really, really hard about something and come up with not just heretofore unseen solutions, but universal solutions that overcome all obstacles, is odiously simplistic and barely worth considering.
It is just a retelling of his base error in everything he does, where he reduces intelligence, understanding, skills, and abilities to a single, fungible commodity where more = better. He does it with his futurist predictions about the rate of advancement, he does it with his warnings about the danger of AI, he does it with his obsession with nootropics. ikanreed 🐐Bleat at me 16:13, 1 March 2019 (UTC)

Taxouck's big rant and reply to the post above

Information about SIAI

This LessWrong post, https://www.lesswrong.com/posts/qqhdj3W3vSfB5E9ss/siai-an-examination, tells us about the expenses and revenue of the organization MIRI. In this chart, http://images.lesswrong.com/t3_5il_7.png?v=3b5b3af66c23a90359308a68b369f487, I found that Jeffrey Epstein donated $50,000. Don't think it is the same Jeffrey Epstein? Well, see this: https://www.nytimes.com/2019/07/31/business/jeffrey-epstein-eugenics.html?auth=login-email&login=email — Unsigned, by: Teerthaloke101 / talk / contribs