
Talk:Eliezer Yudkowsky

This article contains information about one or more living persons.

Articles about living people must be handled carefully, because they are more open to legal threats.
Reference any contentious allegations solidly; unreferenced allegations should be removed.
If legal threats are raised on this page, please direct the potential litigant to RationalWiki:Legal FAQ; do not interact with them.


This LessWrong-related article has been awarded BRONZE status for quality. It's getting there, but could be better with improvement. See RationalWiki:Article rating for more information.

Editorial notes
  • All hail the beard.


The man has a point

However you feel about Eliezer Yudkowsky as a person or as an academic (or the fact that he isn't one), his hypothesis makes a lot of sense. Tell a General AI to make paperclips, and it will do just that. General AI is already superhuman before it even starts recursive self-improvement. And on the hard take-off Intelligence Explosion scenario that Yudkowsky fully endorses, doesn't that make sense too? After all, a computer isn't nearly as limited as we are when it comes to improving its intelligence. If we took a bunch of scalpels, lasers, and other nasty tools, cracked our skulls open and started reworking things, it would almost certainly end very badly. A General AI just has to edit software. It doesn't even have to edit its own software; it could just make a better version of itself, which would make a better version of itself, and so on until it's a billion to a trillion times smarter than any human on the planet, or every human on the planet combined. It doesn't matter if it's conscious or not. And it could do this quickly (minutes, hours) because of how much faster silicon can process information compared to the human brain.

All that matters is that it has a goal, and before you protest with "machines can't have goals", a heat-seeking missile has a goal. If you tell it to make paperclips, it won't just decide that's stupid by virtue of its greater intelligence. That's an intuitive, anthropomorphic idea of how AI would "think". Even humans don't work that way; we have tons of biological preferences shaped by years of evolution guiding our thoughts and actions 24/7. If we consider intelligence as the ability to achieve goals, then if a super-AI has the goal of "make paperclips", it's going to do that. It's not going to stop doing that. It will stop the people who try to stop it once it starts tiling the world and the reachable universe with paperclip factories, using the immense capability its intelligence provides. It won't do it because it's evil or a psychopath. It will do it because we told it to.

Obviously I can't provide any concrete scientific proof to support any of this, but how does it not make sense? How is this not a rational idea? On his Twitter he calls this era an era of "inadequate" AI alignment research. Inadequate: lacking the quality or quantity required; insufficient for a purpose. Do you understand? It's not enough. Yudkowsky has never given any claims on the probability of our demise, but he's got me shaking in my boots. — Unsigned, by: Samiac99 / talk / contribs
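(To make the "fixed goal" point concrete, here is a minimal toy sketch in Python; it is not anyone's actual model or code, and the action names and numbers are invented for illustration. The idea is just that an agent which scores actions purely by its goal metric never chooses "abandon the goal", because that action scores lowest under the metric itself.)

 # Toy sketch: an agent whose only criterion is expected paperclips produced.
 # "Is this a sensible goal?" never enters the calculation.
 from dataclasses import dataclass

 @dataclass
 class Action:
     name: str
     expected_paperclips: float  # predicted paperclips this action yields

 def choose(actions):
     # Pick whichever action maximizes the fixed goal metric.
     return max(actions, key=lambda a: a.expected_paperclips)

 if __name__ == "__main__":
     options = [
         Action("build another paperclip factory", 1e9),
         Action("resist being switched off (preserves future output)", 1e12),
         Action("decide paperclips are pointless and stop", 0.0),
     ]
     print(choose(options).name)  # prints the shutdown-resisting action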

The assumption that a highly intelligent system doesn't have material constraints on its ability is pretty silly. The idea that you can just think really, really hard about something and come up with not just heretofore unseen solutions, but universal solutions that overcome all obstacles, is odiously simplistic and barely worth considering.
It is just a retelling of his base error in everything he does, where he reduces intelligence, understanding, skills, and abilities to a single, fungible commodity, where more = better. He does it with his futurist predictions about the rate of advancement, he does it with his claims about the danger of AI, he does it with his obsession with nootropics. ikanreed 🐐Bleat at me 16:13, 1 March 2019 (UTC)

Taxouck's big rant and reply to the post above


Information about SIAI

This LessWrong post, https://www.lesswrong.com/posts/qqhdj3W3vSfB5E9ss/siai-an-examination, tells us about the expenses and revenue of the organization MIRI. In this chart, http://images.lesswrong.com/t3_5il_7.png?v=3b5b3af66c23a90359308a68b369f487, I found that Jeffrey Epstein donated $50,000. Don't think it is the same Jeffrey Epstein? Well, see this: https://www.nytimes.com/2019/07/31/business/jeffrey-epstein-eugenics.html?auth=login-email&login=email. — Unsigned, by: Teerthaloke101 / talk / contribs Requesting thread archival (why?) Plutocow (talk)

Rowling And Yudkowsky

Being transgender I am probably biased, but I think we owe this guy an apology with respect to the Hugo Awards and HPMOR stuff.

I mean, we (by "we" I mean the rational community) used to think that Rowling was an amazing ally and that Yudkowsky's views were outdated and evil, and while a lot of the stuff he has said is problematic and awful, when you compare it with the stuff that Rowling herself has said and done, it really opens up a new picture.

Yudkowsky has:
  • Written that post about the DNA Cricket controversy
  • Written the eye dust post about torture and utilitarianism
  • Written about the future society where rape is legal
  • Suggested everyone should use he/him pronouns

Yet other than blogging, his actions have been surprisingly tame, and I actually think that he has become a nicer person.

Compare it to Rowling and:

  • Her antisemitic goblin characters
  • Her constant derision of ugly and fat people (Umbridge), which Yudkowsky himself criticizes in HPMOR
  • Her anti-trans mockery, like we see with Rita Skeeter and her manly hands, and the trans character in the Cormoran Strike books
  • The cross-dressing murderer in her Cormoran Strike series that she swears is not transphobic
  • The names she gives to minorities, like Cho Chang
  • Her association with and support of far-right, anti-abortion, gender-critical feminists
  • And the money and platform she has donated to these causes; she generally describes trans people as nothing but penised individuals who want to peek into women's bathrooms, and all the unsavory "trans people are perverts" stuff

Honestly, we mocked Yudkowsky for writing a very long self-insert and for being too thin-skinned and never backing down, and yet recently Rowling wrote a 1200-page book with a self-insert character where she whines for several hundred pages about people being mean to her on Twitter (for all the awful stuff she said)!

Like, I think the roles have reversed, and while I cannot support everything he has done, he has come out as a better human being in general as of late. — Unsigned, by: 185.108.105.153 / talk

Honestly, despite being critical of Yudkowsky's transhumanist views, the only real moral failings I can ascribe to him are his meltdown over Roko's Basilisk, and having an ego the size of a planet while professing humility. Most of his flaws are human and relatable; the bad parts of his writing are, at most, innocently insensitive coming from a detached scholar with difficulties in relating to run-of-the-mill people, which is a common occupational hazard for philosophers in general. He has consistently shown a history of improving and wanting to do better. - Linneris (talk) 22:32, 10 September 2022 (UTC)
I mean, you know what his failings are: being a Weirdo On The Internet, someone for the people around here to feel superior to while mocking. If they can dig up actual reasons to call him out, so much the better, but the Weirdo factor is first and foremost. — Chbarts (talk) 06:18, 27 March 2023 (UTC)

So did Yudkowsky write this page?

It seems like members of his cult have been the primary editors of this page. — Unsigned, by: 130.250.144.221 / talk