Talk:Eliezer Yudkowsky

This article contains information about one or more living persons.

Articles about living people must be handled carefully, because they are more open to legal threats.
Reference any contentious allegations solidly; unreferenced allegations should be removed.
If legal threats are raised on this page, please direct the potential litigant to RationalWiki:Legal FAQ; do not interact with them.


This LessWrong-related article has been awarded BRONZE status for quality. It's getting there, but could be better with improvement. See RationalWiki:Article rating for more information.

Editorial notes
  • All hail the beard.

Archives for this talk page: (new)

The man has a point[edit]

However you feel about Eliezer Yudkowsky as a person or as an academic (or the fact that he isn't one), his hypothesis makes a lot of sense. Tell a General AI to make paperclips, and it will do just that. General AI is already superhuman before it even starts recursive self-improvement. And on the hard take-off Intelligence Explosion scenario that Yudkowsky fully endorses, doesn't that make sense too? After all, a computer isn't nearly as limited as we are when it comes to improving its intelligence. If we took a bunch of scalpels, lasers, and other nasty tools, cracked our skulls open, and started reworking things, it would almost certainly end very badly. A General AI just has to edit software. It doesn't even have to edit its own software; it could just make a better version of itself, which would make a better version of itself, and so on until it's a billion to a trillion times smarter than any human on the planet, or every human on the planet combined. It doesn't matter if it's conscious or not. And it could do this so quickly (minutes, hours) because of how much faster silicon can process information compared to the human brain. All that matters is that it has a goal, and before you protest with "machines can't have goals", a heat-seeking missile has a goal. If you tell it to make paperclips, it won't just decide that's stupid by virtue of its greater intelligence. That's an intuitive, anthropomorphic idea of how AI would "think". Even humans don't work that way: we have tons of biological preferences shaped by years of evolution guiding our thoughts and actions 24/7. It doesn't matter if AI is conscious or not. If we consider intelligence as the ability to achieve goals, then if a super-AI has the goal of "make paperclips", it's going to do that. It's not going to stop doing that. It will stop people from trying to stop it when it starts tiling the world and the reachable universe with paperclip factories, using the immense capability produced by its intelligence. It won't do it because it's evil or a psychopath. It will do it because we told it to. Obviously I can't provide any concrete scientific proof to support any of this, but how does it not make sense? How is this not a rational idea? On his Twitter he calls this era an era of "inadequate" AI alignment research. Inadequate: lacking the quality or quantity required; insufficient for a purpose. Do you understand? It's not enough. Yudkowsky has never made any claims about the probability of our demise, but he's got me shaking in my boots. — Unsigned, by: Samiac99 / talk / contribs

The assumption that a highly intelligent system doesn't have material constraints on its ability is pretty silly. The idea that you can just think really, really hard about something and come up with not just heretofore unseen solutions, but universal solutions that overcome all obstacles, is odiously simplistic and barely worth considering.
It is just a retelling of his base error in everything he does, where he reduces intelligence, understanding, skills, and abilities to a single, fungible commodity where more = better. He does it with his futurism predictions about the rate of advancement, he does it with the danger of AI, he does it with his obsession with nootropics. ikanreed 🐐Bleat at me 16:13, 1 March 2019 (UTC)
I'd have to agree. That is a lot of "ifs" to really argue that he's making a point. So far, what I see from Eliezer is just his fears not matching reality. The fact that you even said "I can't provide any scientific proof but how could it not make sense" already says more than enough. Bear in mind he has no degrees in anything he talks about, so all of it should be met with heavy skepticism. Yokaiwatch (talk) 01:46, 15 May 2025 (UTC)
"Tell a General AI to make paperclips, and it will do just that." Why? That is now how actually existing intelligences like dogs or humans or bureaucracies behave (Stafford Beer's ghost is waiting in the wings). Its also not how complex information processing systems like LLMs behave, they take in tokens and emit statistically expected tokens which can seem like obeying instructions in the same way that a carnival cold-reader can seem to speak to the dead. If we want to repeat a process one time after another, we create a deterministic machine or strip out as much intelligence as possible. Yud must have read pop science books from the Cold War which show that technology often follows a S-curve. Some of those argued "maybe S-curves will keep adding on to each other and we will go from railroads to interstellar spacecraft in 150 years" but all that stopped around 1970. ikanreed's point that Yud is obsessed with IQ Ăźber Alles (and thinks he has the highest IQ) is a good one since his only achievements in life are posting on the Internet, fundraising, and collecting spouses and play partners. His intelligence has not lead to notable achievements except persuading people to give him money and admiration.Polydamas (talk) 15:17, 15 May 2025 (UTC)

Taxouck's big rant and reply to the post above[edit]


Information about SIAI[edit]

This LessWrong post, https://www.lesswrong.com/posts/qqhdj3W3vSfB5E9ss/siai-an-examination, tells us about the expenses and revenue of the organization MIRI. Here, http://images.lesswrong.com/t3_5il_7.png?v=3b5b3af66c23a90359308a68b369f487, I found that Jeffrey Epstein donated $50,000. Don't think it is the same Jeffrey Epstein? Well, see this: https://www.nytimes.com/2019/07/31/business/jeffrey-epstein-eugenics.html?auth=login-email&login=email. — Unsigned, by: Teerthaloke101 / talk / contribs Requesting thread archival (why?) Plutocow (talk)

Rowling And Yudkowsky[edit]

Being transgender, I am probably biased, but I think we owe this guy an apology with respect to the Hugo Awards and HPMOR stuff.

I mean, we (by "we" I mean the rationalist community) used to think that Rowling was an amazing ally and that Yudkowsky's views were outdated and evil, and while a lot of the stuff he has said is problematic and awful, when you compare it with the stuff that Rowling herself has said and done, it really opens up a new picture.

Yudkowsky has:
  • Written that post about the DNA Cricket controversy
  • Written the eye dust post about torture and utilitarianism
  • Written about the future society where rape is legal
  • Suggested everyone should use he/him pronouns

Yet other than blogging, his actions have been surprisingly tame, and I actually think that he has become a nicer person.

Compare it to Rowling and:

  • Her antisemitic goblin characters
  • Her constant derision of ugly and fat (Umbridge) people, which Yudkowsky himself criticizes in HPMOR
  • Her anti-trans mockery, like we see with Rita Skeeter and her manly hands and the Cormoran Strike trans character
  • The cross-dressing murderer in her Cormoran Strike books that she swears is not transphobic
  • The names she gives to minorities, like Cho Chang
  • Her association with and support of far-right anti-abortion gender-critical feminists, and the money and platform she has donated to these causes
  • The way she generally describes trans people as nothing but penised individuals who want to peek into women's bathrooms, and all the unsavory "trans people are perverts" stuff

Honestly, we mocked Yudkowsky for writing a very long self-insert and for being too thin-skinned and never backing down, and yet recently Rowling wrote a 1,200-page book with a self-insert character where she whines for several hundred pages about people being mean to her on Twitter (for all the awful stuff she said)!

Like, I think the roles have reversed, and while I cannot support everything he has done, he has come out as a better human being in general as of late. — Unsigned, by: 185.108.105.153 / talk

Honestly, despite being critical of Yudkowsky's transhumanist views, the only real moral failings I can ascribe to him are his meltdown over Roko's Basilisk and having an ego the size of a planet while professing humility. Most of his flaws are human and relatable; the bad parts of his writing are, at most, innocently insensitive, coming from a detached scholar with difficulties relating to run-of-the-mill people, which is a common occupational hazard for philosophers in general. He has consistently shown a history of improving and wanting to do better. - Linneris (talk) 22:32, 10 September 2022 (UTC)
I mean, you know what his failings are: being a Weirdo On The Internet, someone for the people around here to feel superior to while mocking. If they can dig up actual reasons to call him out, so much the better, but the Weirdo factor is first and foremost. — Chbarts (talk) 06:18, 27 March 2023 (UTC)

So did Yudkowsky write this page?[edit]

It seems like members of his cult have been the primary editors of this page. — Unsigned, by: 130.250.144.221 / talk

Private life[edit]

I added a brief note on Yud's sex life. That is a deep dark rabbit hole and not really the RW house style, but I think he is pretty open that he uses his role in LessWrong to recruit women for kink/sex/romance (and the article already has a long section on fetish content in his fanfic). He certainly shares his tastes in the same places he promotes his cult, e.g. https://x.com/ESYudkowsky/status/1433112387322662919 or https://www.reddit.com/r/SneerClub/comments/12nv8ti/supergenius_lifehack_use_your_cult_as_distributed/ At one point the party line was that the math fetish was a game with a specific partner, but I think the Pravda has changed: https://web.archive.org/web/20190705091425/https://cptsdcarlosdevil.tumblr.com/post/183220377753/a-k-a-l-t-y-n-i-go-away-for-5-minutes-and-rat "Some altruist!" feels like a RW project; dissecting specific relationships does not. Polydamas (talk) 06:41, 23 May 2025 (UTC)

It would seem to be more evidence of cult behavior, so missional for that reason at least. Bongolian (talk) 07:04, 23 May 2025 (UTC)
If anyone has a smoking gun about where he picks up (or picked up) dates, feel free to add it. Most of the stories about how older men in the community make passes at younger women avoid naming names, or mention people like Michael Vassar, whom the rest of LW can deny if necessary. His leaked online dating profile boasted about his leadership of LW and his busy polyamorous life (no direct link because I'm not sure he wanted it to be public), and he does not seem to have any mainstream hobbies where he would meet women IRL. Polydamas (talk) 15:38, 23 May 2025 (UTC)