Talk:Eliezer Yudkowsky/Archive1
Eliezer Yudkowsky Facts[edit]
That Eliezer Yudkowsky Facts page is the most disturbing thing I have read in my life. I don't need a shower, I need the outer layer of my skin peeled off. - π 02:43, 15 November 2010 (UTC)
- Why? They're just screwing around. Tetronian you're clueless 02:47, 15 November 2010 (UTC)
- It is fanboyism at a disturbing level. - π 02:55, 15 November 2010 (UTC)
- If you think that's scary, check this out. Oh, and I agree with Armond's reaction in the thread I just linked to - LW is just joking around, although doing so does imply that there is at least a small personality cult. Tetronian you're clueless 02:58, 15 November 2010 (UTC)
- I find the "EY Facts" thing to just be hilarious. Maybe it's just pitched right for my sense of humour. And to be honest, I think the page isn't too bad as a way to "get in" to the ideas pitched at LW. E.g., even if you know nothing about the mistaking the map for the territory fallacy, the factoid about it explains it pretty well and pretty quick in a way that just sticks with you. theist 09:36, 15 November 2010 (UTC)
- I get why the jokes are funny; it is the way they are salivating over his crotch like a bunch of Justin Bieber fans, and the fact that he is hosting this shit on his own website, that disturbs me. If anyone started a page like that about Trent I would have no qualms with deleting the page and perma-banning the participants, even blocking their IPs from accessing the server, before this site turns into that circle jerk. - π 12:00, 15 November 2010 (UTC)
- I find the "EY Facts" thing to just be hilarious. Maybe it's just pitched right for my sense of humour. And to be honest, I think the page isn't too bad as a way to "get in" to the ideas pitched at LW. E.g., even if you know nothing about the mistaking the map for the territory fallacy, the factoid about it explains it pretty well and pretty quick in a way that just sticks with you. theist 09:36, 15 November 2010 (UTC)
- If you think that's scary, check this out. Oh, and I agree with Armond's reaction in the thread I just linked to - LW is just joking around, although doing so does imply that there is at least a small personality cult. Tetronian you're clueless 02:58, 15 November 2010 (UTC)
- It is fanboyism at a disturbing level. - π 02:55, 15 November 2010 (UTC)
- Trent Toulouse got roundhouse kicked by Chuck Norris, and RationalWiki went down again.
- Trent Toulouse doesn't do pushups ... he just doesn't.
- When Andrew Schlafly goes to sleep every night he checks his closet for Trent Toulouse. - David Gerard (talk) 17:48, 15 November 2010 (UTC)
- Trent Toulouse once reset the RationalWiki server from Australia with nothing more than a piece of string and a paper clip.
- RationalWiki's editors are just vandals that are too scared of Trent Toulouse to do any vandalism.
- Trent Toulouse doesn't need to upgrade Media Wiki, he just growls at the server until it works again.
- Trent Toulouse was once at the centre of a massive circle jerk. We called this event the global flood. theist 21:22, 15 November 2010 (UTC)
- Y'all creep me out sometimes. Tmtoulouse (talk) 08:45, 22 November 2010 (UTC)
- Nonsense. Trent Toulouse can't be creeped out. Even by Trent Toulouse. Making a rapey face. While horror movie soundtracks play in the background. theist 09:17, 22 November 2010 (UTC)
Fan Fiction[edit]
"but as of yet there doesn't seem to be any fan fiction of Yudkowsky" - actually, there's this... --PeerInfinity (talk) 19:10, 18 November 2010 (UTC)
- That seems very, very special. theist 02:15, 19 November 2010 (UTC)
Crank?[edit]
Would it be a stretch to call Yudkowsky a crank?
- He is self-educated with no academic degrees.
- He lacks peer-reviewed publications, or any publications whatsoever outside of his own websites and fanfiction.net.
- He is derisive of mainstream science, accusing it of collective bias and blindness.
- He has been pushing fringe ideas that are treated, at best, with skepticism by mainstream science, including attempts to prove a particular interpretation of quantum mechanics while, by definition, no particular interpretation can be singled out empirically as correct.
- He has formed a bit of a doomsday cult around himself, which holds the AI apocalypse to be near and treats cryonics and donating to SIAI as the path to salvation for the chosen. This bit may be unintentional, though, in a Life of Brian kind of way.
- LucidFox (talk) 19:49, 13 August 2011 (UTC)
- Yes and no. He has those WTF moments but I prefer the old line about him tap-dancing on the line between genius and insanity. ADK...I'll explicate your air conditioner! 22:19, 13 August 2011 (UTC)
- Yes, but Yudkowsky's behavior also resembles that of intelligent cult leaders with interests in science and in fringe philosophies. Yudkowsky creeps out a friend of mine because he reminds this friend of his bad experience with Andrew Galambos's cult in the 1970s. Yudkowsky also reminds me of what I've read of Keith Raniere, a similar sort of Wunderkind who founded the NXIVM cult: http://www.timesunion.com/local/article/Secrets-of-NXIVM-2880885.php Advancedatheist (talk) 02:17, 11 April 2012 (UTC)
- Fascinating read. Keith Raniere seems like a much more capable variation on that theme, though.
Well, his MWI posts are not totally terrible, as there is no scientific consensus on this topic (and many prominent scientists do support MWI). His AI effort, however, is truly awful. He has no CS qualification, nor has he properly learnt the subject on his own. He doesn't understand big-O notation, and all his talk about AI concerns AIs that are utterly and completely non-implementable in our universe, in a way that cannot be fixed - both when discussing unfriendly AI risks, and especially when discussing FAI. Simply put, in our universe you can't predict jack shit. If you predict weather for 3 days using 10^15 operations, you will predict weather for 6 days using 10^30 operations (assuming an equally crappy implementation; making the implementation less crappy might stretch it to 8 days). Dmytry (talk) 06:46, 26 March 2012 (UTC) (ahh, and I'm a programmer, not a quantum physicist. I don't know if he's making a lot of errors in quantum mechanics or not, just pointing out that MWI isn't very crank. The AI stuff... Terminator fan fiction. No relevance to real AIs that operate on limited hardware. Learn, he must.)
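A minimal sketch of that scaling claim (the 10^15-operations-for-3-days figure is the commenter's illustration, not real meteorology): if forecast error in a chaotic system grows exponentially with lead time, the compute needed to hold accuracy fixed is exponential in the horizon, so doubling the horizon squares the operation count.

```python
import math

def ops_needed(days, base_days=3.0, base_ops=1e15):
    # Toy cost model: ops = base_ops ** (days / base_days).
    # All numbers are hypothetical, taken from the comment above.
    return base_ops ** (days / base_days)

for d in (3, 6, 9, 12):
    print(f"{d:2d} days -> ~10^{math.log10(ops_needed(d)):.0f} operations")
# 3 days -> 10^15, 6 days -> 10^30, 9 days -> 10^45, 12 days -> 10^60
```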
- The thing about the MWI posts is that there's always the get-out-of-jail-free card of being a "philosophical position", which means you don't need no crummy evidence to say that your interpretation is correct. This is where all the probability theory comes in. While that's all well and good, there's a distinction between "it's 90% probable that" and "we're 90% confident given certain evidence" - because in the latter case a piece of evidence can come along and blow that figure down to 10% confident. Yudkowsky sometimes makes this distinction clear when teaching probability theory, but I'm not convinced he makes it all too clear when teaching about cryonics, physics and so on, the areas where you might well be justified in calling him a crank. bomination 10:50, 26 March 2012 (UTC)
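A minimal sketch of that distinction, with made-up numbers: a "90% confident" posterior is conditional on the evidence seen so far, and a single sufficiently strong new observation can knock it down to 10%.

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """One Bayesian update: returns P(H|E) from P(H), P(E|H), P(E|~H)."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

p = 0.90                   # "90% confident", given the earlier evidence
p = update(p, 0.01, 0.81)  # new evidence 81x likelier if H is false
print(round(p, 2))         # 0.1 -- the 90% was never unconditional
```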
- Well, I guess. With MWI - we don't know how quantum mechanics combines with gravity, and gravity is only apparent at large scales, so we have some totally unknown phenomena here that we should strongly expect to overthrow much of what we know (especially things like MWI). It's not even a case of expecting MWI by default. It's a case of expecting QM to be substantially incomplete and/or wrong. The probability theory posts are very elementary stuff, and non-stupid people solve the probability problems correctly without ever having heard of Bayes. That is not to say one shouldn't preach elementary stuff. One absolutely should. Most people really suck at probability. Dmytry (talk) 17:14, 28 March 2012 (UTC)
- I don't know if "crank" is the right word so much as "religious" when it comes to the transhumanist stuff. In terms of academic training, I often think a good amount of time spent in the library can mean much more than that piece of paper called a "degree." On the other hand, it's also easy to seal yourself off in an intellectual bubble as an autodidact. Yudkowsky's lack of formal training in cognitive science comes through when he starts pushing his hyper-computationalist/reductionist line (which also explains his affinity for Pinker-ian evo psych). I don't think some of the positions he's advocating are wrong per se, but he defends them poorly with lots of handwaving and straw manning of critics. He comes off as one of the typical AI-types who think they also understand cognitive science as it relates to the human mind but have no time for things like neuroscience, social psychology, philosophy in general, etc. Funny thing, we were both deeply influenced by Hofstadter, but seem to have taken nearly the opposite message away. Nebuchadnezzar (talk) 17:59, 28 March 2012 (UTC)
- The AI-box experiment sounds pretty crank when you read into it, but I agree on the "religious" description to a point. It's very much the same as the simulation proponents views. "We'll make shit up, and then just wave it away as philosophical". Oh well, just stick with reading what he says on bayesian rationalism in the core LW sequences and take the rest with a good crack of salt and pepper. theist 19:21, 28 March 2012 (UTC)
- What creeps me out is not so much him but the community, the circlejerk of fear mongering, rationalizing the xenophobia in precisely the creepy way (where one invokes big abstractions like Mankind, as the AI is intrinsically not creepy enough when the world is full of non-you intelligences of whom you can't assume even rudimentary friendliness). WRT being an autodidact... Reading is not enough; it is important to do exercises. I learned programming on my own, but it is a subject where a very rude machine tells you you're wrong, a lot. Also, for the AI types, one need to distinguish the smart people who can e.g. code a self driving car or a theorem prover, and the people who seek a topic where their incompetence can't be contrasted with competence via some objective criteria. Dmytry (talk) 09:26, 29 March 2012 (UTC)
Does "AI" even exist?[edit]
People keep referring to Yudkowsky as an "AI researcher." Since entering my 50s recently, I've noticed more and more how many of the gee-whiz technological marvels I heard about in my youth simply haven't arrived, and I suspect they will continue to remain mirages. On the threshold of that mysterious, far-future year 2012, I live in "the future" of the Space Age, nanotechnology and even artificial intelligence, yet these things either don't exist or have turned to ashes.
BTW, how does Yudkowsky make an income living in the expensive Bay Area when he didn't go to high school or college, and as far as I can tell has never had a real job? — Unsigned, by: Advancedatheist / talk / contribs
- Soft-AI does exist, but is not the stuff you read about/watch on TV. As to how he makes a living I have no clue. Nowwhat? 13:35, 15 December 2011 (UTC)
- I don't know what the day-to-day work actually is that he does, but most of it is based around the theory of how to develop artificial intelligence. You see, the reason it hasn't materialised is that people thought "hey, we just need to build a powerful computer, that's all!! Easy!" - obviously, this is wrong. The problem you envisage, of all these marvels predicted 50 years ago not coming true, is probably best answered by Nassim Taleb rather than Yudkowsky - 50 years ago we predicted space ships and robot servants... but we didn't predict the internet, iPhones, GPS in every car, computers in every home, working from home, social network sites, wikipedia... we also thought that people in the future would still be carrying their suitcases, as evidenced by the fact that cases on wheels were only invented in the late 1980s. The error is in predictive ability, not the actual advances of science; because if you had all the information to accurately predict what would be invented in the future and when, you'd have all the information you needed to actually invent it now. narchist 15:36, 15 December 2011 (UTC)
- He started SIAI. So he does in fact work for an artificial intelligence research organisation as a researcher for money. Whether they actually do anything useful towards realising artificial intelligence is a matter for those deeper in the field, but their byproducts are certainly interesting, entertaining and slightly useful - David Gerard (talk) 12:05, 26 March 2012 (UTC)
- TBH, the SIAI looks like a combination of uneducated guesswork and fear mongering. So far the only thing it has achieved is making people less friendly [1]. It's a matter of time until these folks inspire the next Unabomber or the like. I don't like SIAI at all. Dmytry (talk) 17:38, 27 March 2012 (UTC)
- Inspiring mail bombing might be a bit of a stretch. gnostic 18:33, 27 March 2012 (UTC)
- SIAI ideas have managed to actually break a few people's brains (c.f. the Forbidden Post) when they decompartmentalise too fast and take ideas too seriously, as if humans can actually think or something - David Gerard (talk) 19:41, 27 March 2012 (UTC)
- Well, in that link I posted, the Novamente guy says he received a lot of mail telling him how his AI is going to kill us all and shit. If you have enough people posting that, someone will post a bomb. It's about taking too seriously a: one's own stupid thinking and b: stupid ideas that only seem true due to a lack of counterargument, which is actually the result of ignorance and of not frigging listening to counterarguments. They created this community where it is certain that some long-winded conjunction is true, and therefore AI will kill us all. If you break a couple of assumptions, e.g. by pointing out that an AI on our hardware will be a very shitty forecaster and thus won't work like a naive stupid utility maximizer, they immediately add the AI inventing quantum computing or some shit, and upvote that. Basically, as long as one can think of an AGI that can kill us all, the SIAI would argue that it is a representative sample and the crowd will be 'almost 100% sure', which, probabilistic thinking being what it is, is good enough for mail bombing.— Unsigned, by: Dmytry / talk / contribs
- I wouldn't worry about it too much. Yudkowsky has some interesting things to say, at least when he isn't proselytizing for his techno-religion. Nebuchadnezzar (talk) 06:30, 28 March 2012 (UTC)
- I dunno. I liked his writings before I looked into the whole FAI issue, but seeing how he's earning money by fear mongering and what amounts to a donation scam (no way he honestly believes he's making any progress with his vague handwaving and lack of any expertise on the topic), my opinion of him has gone down the drain. Dmytry (talk) 06:37, 28 March 2012 (UTC)
- Oh, I think he honestly believes it. He's definitely drinking his own Kool-Aid. Nebuchadnezzar (talk) 06:46, 28 March 2012 (UTC)
- Maybe... So much the worse. People who are dishonest, but themselves know the truth, tend to deviate from the truth less. Anyhow, if one honestly wants to make a difference in AI, one should study CS properly. And one should understand big-O notation, and not mistake algorithms with towers of exponents in their O as representative of how algorithms without towering exponents can behave. The creation of AI is a problem of making something that runs in reasonable time, and the stuff with towering exponents is not a very good starting point (and it's not difficult enough to make AIs with towering exponents and oracles to make it worthwhile to read about them). Dmytry (talk) 08:33, 28 March 2012 (UTC)
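To make the big-O point concrete, a minimal sketch (tiny, arbitrary inputs) contrasting a feasible n·log n bound, a merely exponential one, and a tower of exponents:

```python
import math

def tower(base, height):
    # Iterated exponentiation: base^base^...^base, `height` levels.
    result = 1
    for _ in range(height):
        result = base ** result
    return result

for n in (2, 3, 4):
    print(f"n={n}: n*log2(n) ~ {n * math.log2(n):.1f}   "
          f"2^n = {2 ** n}   tower of {n} 2s = {tower(2, n)}")
# tower(2, 4) = 2^2^2^2 = 65536; tower(2, 5) already has ~20,000 digits.
```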
Author avatars[edit]
Is he a would-be male version of a Mary Sue (check your favourite web browser for MS test sites)? 212.85.6.26 (talk) 16:04, 15 December 2011 (UTC)
- Put it this way: you know the bit where Harry bites the maths teacher who doesn't know what an algorithm is? EY did that - David Gerard (talk) 12:10, 26 March 2012 (UTC)
- Unarguably a self-insert, but definitely not a Mary Sue. 24.237.65.95 (talk) 05:21, 20 January 2013 (UTC)
Personality[edit]
The DSM-IV has diagnostic criteria for his behaviour... it'd be interesting to have the opinions of several psychologists. — Unsigned, by: Dmytry / talk / contribs
- Matching Internet scribblings to the DSM-IV is silly and a waste of time. That's not how the DSM-IV works - David Gerard (talk) 06:52, 26 April 2012 (UTC)
- What? But how will I ever shove faceless internet people into handy boxes now? Radioactive afikomen Please ignore all my awful pre-2014 comments. 08:00, 26 April 2012 (UTC)
- Permit me to explain: "AH SOMEONE DOESN'T AGREE WITH ME!!! THEY MUST HAVE A PERSONALITY DISORDER!!!!!" bomination 09:59, 26 April 2012 (UTC)
- Actually, it's an area of scientific study. Furthermore, when one sets up a charity for one's work on friendly artificial intelligence - in an area where one never studied the fundamentals - so to say, collecting money for participation in the Olympic marathon before one learns to walk - that does raise the question 'WTF?!'. It is just not normal to live off donations for your work in an area where you haven't learned the basics. And insofar as money collected over the internet is considered an IRL affair, it's not purely a matter of 'internet stuff'. edit: David, by the way, you recall you referred to mass downvotings on LW? Those were on a user who went against the notion of directing all one's charitable donations to 1 'best' charity. This is real-life stuff here; that it happens over the internet and the money is transferred electronically doesn't make it any less the case that a: they are talking people into giving them money on faith with no accountability whatsoever, which already implies almost certain fraud (not least because non-fraudsters have too much reputation to lose from pulling that sort of trick without actually doing their best), and b: they are talking those who donated into diverting what they would have donated elsewhere towards them. Dmytry (talk) 12:10, 27 April 2012 (UTC)
Also[edit]
This guy has no education in his field and hasn't gone to college or finished high school. Here's a thought: would you trust brx to be a part of a think tank (or in this case, a probable crank tank)?--User:Brxbrx/sig 21:33, 18 June 2012 (UTC)
- Brx vs Yudthingies fanboys - an irrational screaming match of epic proportions. Usually a comment by Brx alone can start a localised flame war, but pick on the weak spot of the LessWrong cultists and fun will be had by all not involved *grabs popcorn* Pi 3:14 (talk) 07:09, 19 June 2012 (UTC)™
- I wouldn't hold lack of degree etc against him if he did something difficult with some measure of success, any time between being a newborn baby and working on the most important problem in the world. Dmytry (talk) 10:56, 19 June 2012 (UTC)
- There's a cite for it on Wikipedia if you're interested in adding it to the article. Seems notable to me.--User:Brxbrx/sig 13:44, 19 June 2012 (UTC)
- What's notable enough? Handwaving in published form? How does one fail to handwave? edit: lol, they removed the autodidact bit since the last time I saw it. Wikipedia can't handle all-on-internet cults; those have a high online-presence-to-notability ratio. edit2: never mind, I'm blind, or I saw it previously with the autodidact bit up top. Anyway, I wouldn't hold not finishing school against him. I do hold his lacking any technical accomplishment, in a situation where he really needs one to be taken seriously, as pretty much proof that he lacks the level of ability he presumes he has. Dmytry (talk) 15:01, 19 June 2012 (UTC)
- I've nothing against autodidacts, given that they have actually autodidacted. This guy however dismisses the scientific community on the issue, considering education on it a net negative. Additionally, he believes he has a method that works better than science. Autodidact failure here.--Baloney Detection (talk) 19:27, 19 June 2012 (UTC)
- Well, it's not like he's pretending to be a neurosurgeon, but we still don't give people without the proper training keys to the lab (although there is no lab in this case). It seems an incredible waste to put on a pedestal a man who doesn't know what he's talking about.--User:Brxbrx/sig 20:49, 19 June 2012 (UTC)
- More like, we don't pay >40k/year to people without them passing some sort of test (though education is a really shitty test). Good (as in, morally decent) people in general do not even take money for work they are not qualified to do, and seek an objective measure of some kind before taking money. (This also allows decent people to get more money for their research when they are qualified.) Think about it: would a non-scam research charity even be able to hire him at such pay? SI was crank before Luke came in; now it is a scam, as Luke doesn't have the understanding to even be an honest crank, and with the 'best intentions' would do something like renaming the whole thing to "AI Safety Centre" even though it has never been about anything other than the extreme singularity scenario (foom). Dmytry (talk) 15:44, 20 June 2012 (UTC)
Serious scholars in the field may dispute that he is autodidacted[edit]
What does that mean? That he didn't teach himself? Of course he did. Is the sentence claiming autodidacts are legitimate scholars, or that the term denotes such a scholar?--"Shut up, Brx." 16:36, 18 December 2012 (UTC)
The Harry Potter fic[edit]
Given that the article leaves this uncriticized, it might be worth mentioning the frankly pretty appalling section of it that this post on the Something Awful forum picks apart (bear in mind the tasteless word censors operating on unregistered readers of SA if you don't know about them already, most importantly "surprise sex" being substituted for "rape"). Also the point about his general orientalism and whiggery (see this post later on) is very valid. --Lord Shang (talk) 03:03, 21 June 2012 (UTC)
- Seriously, what the fuck:
- And in the slowed time of this slowed country, here and now as in the darkness-before-dawn prior to the Age of Reason, the son of a sufficiently powerful noble would simply take for granted that he was above the law. At least when it came to a little rape here and there. There were places in Muggle-land where it was still the same way, countries where that sort of nobility still existed and still thought like that, or even grimmer lands where it wasn't just the nobility. It was like that in every place and time that didn't descend directly from the Enlightenment. A line of descent, it seemed, which didn't quite include magical Britain, for all that there had been cross-cultural contamination of things like pop-top soda cans.
- Even better, apparently he originally wrote Even in Muggle-land it was probably still happening, somewhere in Saudi Arabia or the darkness of the Congo. Yudkowsky is a stereotypical 19th-century civilizing-mission positivist, who'd'a thunk it? --Lord Shang (talk) 03:18, 21 June 2012 (UTC)
- The civilizing mission positivism is hardly disturbing...
- The "surprise sex" is a recurring theme in EY's writing. In the three worlds collide, the rape is legalized and the obvious Mary Sue is a rapist-murderer : http://lesswrong.com/lw/yc/epilogue_atonement_88/
- "If you're going to ask that question," said the Master of Fandom, "when the answer is obviously yes, thus wasting a few more seconds -"
- "Back in the ancient days that none of you can imagine, when I was seventeen years old - which was underage even then - I stalked an underage girl through the streets, slashed her with a knife until she couldn't stand up, and then had sex with her before she died. It was probably even worse than you're imagining. And deep down, in my very core, I enjoyed every minute."
- Silence.
- "I don't think of it often, mind you. It's been a long time, and I've taken a lot of intelligence-enhancing drugs since then. But still - I was just thinking that maybe what I'm doing now finally makes up for that."
The guy genuinely creeps me out. Living off donations without accountability (the donations are for FAI work; he's writing Harry Potter fanfic, wtf?), being admired out of proportion with his accomplishments... utilitarianism too, then the fiction, then the work founded upon the belief that artificial intelligence will act like a psychopath, so we must make it a psychopath that shares our goals (even the 'oracle' is like this: if you ask their oracle how to make tea, it will compute what to tell you that will result in you making tea, rather than present the information you're lacking). Dmytry (talk) 05:42, 21 June 2012 (UTC)
- Other bit: torture vs dust specks http://lesswrong.com/lw/kn/torture_vs_dust_specks/uf7 . Combine this with his ability to rationalize anything by reasoning sloppily, and with his belief in his own importance as saviour of mankind, along with the mere-numbers utilitarianism, whereby the lives of everyone who could have lived are at stake (I haven't got the link now, but their AI stuff is not about 7 or 10 billion people, it's about everyone who'll be able to live not living)... doesn't look good to me. Dmytry (talk) 20:19, 21 June 2012 (UTC)
- He's like a walking caricature of Benthamite utilitarianism. --Lord Shang (talk) 20:55, 21 June 2012 (UTC)
- You seem to have stopped justifying your objections to Yudkowsky and just started linking to random posts while shouting "WHAT A CRANK!!" narchist 00:56, 22 June 2012 (UTC)
- Justify objections to his torture vs dustspecks view? Seriously? Dmytry (talk) 01:39, 22 June 2012 (UTC)
- Yes in these cases I think it's perfectly justified. (Actually I should add that on my part I don't have much of a stake in this debate and I don't consider him crazy or malign, from reading around I just consider a lot of his ideas peculiarly ill-informed and/or verging on offensive.) --Lord Shang (talk) 03:23, 22 June 2012 (UTC)
- It actually doesn't even matter whether it's correct to choose dust specks or torture (a value question; replication of something trivial doesn't necessarily scale up - a trillion really shitty novels won't equal Hamlet). What matters is how that answer combines with his belief in himself as the saviour of not just some 7-10 billion people but of everyone who'll ever live, whom he can save not just from death but from an AI that's slightly not the best. (edit: also keep in mind those folks aspire not to compartmentalize) Dmytry (talk) 07:06, 22 June 2012 (UTC)
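For readers unfamiliar with the notation in the linked post: the 3^^^3 in "torture vs dust specks" is Knuth's up-arrow notation. A minimal sketch of the recursion (my own illustration, not from the linked discussion) shows why the number dwarfs anything physically countable:

```python
def up(a, n, b):
    """Knuth up-arrow: a ^(n arrows) b, defined recursively."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3  = 27
print(up(3, 2, 3))  # 3^^3 = 3^3^3 = 7625597484987
# up(3, 3, 3) is 3^^^3: a power tower of 3s roughly 7.6 trillion
# levels high -- far too large to evaluate or even store.
```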
- I mean, there are objections to his decision in the comments you linked to (namely, comparing suffering concentrated in one individual with suffering spread over 3^^^3 individuals, which is an apples-and-oranges thing), particularly with the specific example, but I don't see how this links to him having some sort of messiah complex without a lot more exposition and logical connection. postate 02:03, 23 June 2012 (UTC)
- I'm not saying that this moral view implies a messiah complex. I'm noting that, in addition to this moral view, he does seriously expect to be able, through his AI mission, to affect an enormous number of people's lives (after all, people will have to live together with his friendly AI till the end of the universe, or so he believes), and that's what makes this moral view not some theoretical oddity but a practical justification for anything, insofar as he can conceive of an argument that it would improve the post-singularity state of mankind. edit: actually, I may need to explain some background on how it creeps me out. Recall that the Soviet Union was built upon utilitarian principles and work towards a 'friendly' system of government. It was no Nazi Germany. It was every bit as bad for its first several decades. Torture-vs-dustspeck type decisions were made in the real world, and people were tortured (and then, ironically, a lot of dust was dropped into people's eyes). The problem with utilitarianism is that utilitarians believe they are utilitarian. For you, when you think of absolute evil, probably Hitler comes to mind: hate as the direct and prime cause of an evil ideology. For me, it's Lenin and Stalin, and the utilitarian ideology that killed somewhere around 30 million in the name of a bright future: ideology as the cause of evil. Dmytry (talk) 06:03, 23 June 2012 (UTC)
James D. Miller's new book gushes all over Yudkowsky.[edit]
James D. Miller's new book Singularity Rising amounts to an advertisement for Eliezer Yudkowsky's greatness and why you should donate to the Singularity Institute to fund Yudkowsky's efforts to save humanity from randomly programmed AIs by getting Yudkowsky's imaginary friendly AI out there first. Reading about Miller's infatuation with Yudkowsky made my skin crawl. If Miller hadn't mentioned in the book that he has a wife and a kid, I would have to ask if he has ever even kissed a girl. http://www.amazon.com/Singularity-Rising-Surviving-Thriving-Dangerous/dp/1936661659 Advancedatheist (talk) 01:46, 12 October 2012 (UTC)
- Saw that Luke Muehlhauser endorses it on the Amazon page. Small wonder.--Baloney Detection (talk) 19:24, 4 November 2012 (UTC)
James D. Miller was a research adviser to Eliezer's Singularity Institute. — Unsigned, by: 103.226.85.215 / talk / contribs
That IQ tests reliably measure intelligence[edit]
This is not a "controversial" position in psychology. The only controversy around this topic exists in the popular sphere, for those that don't understand the research or for people who want to invent unique defintions of intelligence. It has been accepted for over 100 years, since Spearman and Thurstone, that intelligence can be measured by intelligence tests. I don't know much about LW, I have only briefly been on ther site and read through some of their gushings about Bayes Theorem, but they are not wrong in this regard. I'll remove this soon unless anyone objects and wants to enlighten me as to where this supposed controversy exists? Tielec01 (talk) 23:26, 17 December 2012 (UTC)
- To my knowledge, skepticism about such tests is nearly universal in psychology. Can you cite any standard textbooks on the matter to show that yours is the prevailing view?--talk 23:32, 17 December 2012 (UTC)
- Start at Wikipedia. Then check out What Is Intelligence by James Flynn or What Intelligence Tests Miss by Keith Stanovich for some challenges to the validity of IQ testing, both of which were fairly well-received in scientific journals, the former actually being a classic study in intelligence. Nebuchadnezzar (talk) 23:52, 17 December 2012 (UTC)
- Thank you Nebuchadnezzar. Tielec01 (talk) 04:52, 18 December 2012 (UTC)
- On a technical level, the word "reliability" has a specific meaning, and most intelligence tests meet the threshold to be considered reliable (test-retest, Cronbach's alpha, etc.; see the sketch after this comment). Obviously this is not what we are talking about, but it's worth mentioning. I appreciate the links Neb, but neither does much to shake the foundations of intelligence testing; rather, they point to rifts within the testing community. For example: is there a cultural bias to IQ tests, why are there black/white differences in IQ, how many types of intelligence are there, is intelligence heritable, how g-loaded are certain tests, what is the effect of retesting - these are all controversies within the IQ field in the same way that punctuated equilibrium is a rift within biology that goes no way towards demonstrating any controversy in the scientific community about evolution. Yes, there are areas that need much more research, but it is unlikely, given the research to date, that we will eventually discover that IQ tests are not a reliable measure of what has been defined as intelligence.
- In terms of being 'generally accepted' I guess it's a hard thing to prove. The APA has long supported IQ testing and released a comprehensive study on the topic [2]. Among intelligence experts it has widespread acceptance [3] (of course there are disagreements amongst them). It is accepted as a useful predictor for a whole host of outcomes. It is the best predictor of work performance [4] and it is accepted by universities and organisations internationally as useful in this capacity [5].
- I hope this isn't too verbose for you both. Does this demonstrate that, at the very least, it is well accepted by scientists and practitioners that IQ tests measure intelligence, regardless of any personal misgivings you may or may not have about them? As a psychologist and a researcher in the field I have certainly not encountered this 'universal skepticism' that you have, AD, but then I have only been at a couple of universities in Australia, so who knows... Tielec01 (talk) 05:28, 18 December 2012 (UTC)
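Since Cronbach's alpha comes up above, here is a minimal sketch of how it is computed, on made-up item scores (rows are test-takers, columns are items); note that alpha measures internal consistency, i.e. reliability, not validity:

```python
import numpy as np

def cronbach_alpha(scores):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

fake_scores = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [1, 2, 2], [3, 3, 3]])
print(round(cronbach_alpha(fake_scores), 2))  # ~0.96: items move together
```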
- Awesome, thanks. I'm wrong! I never got beyond Psych101 so I guess that's not a surprise:)--talk
- Okay, I think this is way beyond what the sentence was intended as, which is a dig at the Mensa-esque circle jerk mentality at LW. However, we are getting into issues like whether some unitary intelligence ("g") exists, which I would deny. (See Cosma Shalizi's great post on g as a statistical artifact.) I don't think anyone would deny that IQ tests have their place and do measure certain cognitive faculties and skills. But many psychologists would reject any simple relationship between IQ tests and "intelligence" (whatever that may be). Again, these issues are way above the way IQ is treated on LW, which is basically "Look at me, I am teh smartz!!11!!" Nebuchadnezzar (talk) 06:12, 18 December 2012 (UTC)
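As a companion to the Shalizi post linked above, a minimal sketch in the spirit of his argument (a toy version of Thomson's sampling model; all sizes and the sampling rate are arbitrary): tests built from many independent abilities, each tapping a random subset, still produce a dominant first factor that looks like a single "g".

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_abilities, n_tests = 2000, 300, 8

# Independent abilities; each test samples ~half of them at random.
abilities = rng.normal(size=(n_people, n_abilities))
loadings = (rng.random((n_abilities, n_tests)) < 0.5).astype(float)
scores = abilities @ loadings

corr = np.corrcoef(scores, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]
print(round(float(eigenvalues[0] / eigenvalues.sum()), 2))
# The first "factor" explains over half the variance, despite there
# being no general ability anywhere in the generating process.
```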
- I should also mention that in ruling out causes for the Flynn effect, Flynn himself explicitly notes: "The hypothesis that best fits the results is that IQ tests do not measure intelligence but rather correlate with a weak causal link to intelligence." That's not really a minor quibble. Nebuchadnezzar (talk) 06:24, 18 December 2012 (UTC)
- I don't know how LW portray IQ testing, and it's possible they completely get it wrong. However, I think they would be in agreement with a Stanovich style distinction between hardware and software; indeed that seems to be the whole point of the club. So it may be that they are more in agreement with you on this topic than you realise Neb? Anyway, we are in vicious agreement that there are some flaws in intelligence testing that need to be ironed out. Let's park any further argument on that for now; can we agree that, in the company of the other items on that list of controversial beliefs, intelligence testing stands out as the least 'controversial' and might warrant removing?Tielec01 (talk) 06:41, 18 December 2012 (UTC)
- Not least there's a difference between using IQ as a proxy measurement in a statistical sense over a population and using it on an individual. Same with BMI. You cannot use criticism of its use in one sphere as criticism against its use in another sphere. moral 15:08, 18 December 2012 (UTC)
- "...intelligence testing stands out as the least 'controversial' and might warrant removing?" As far as that list goes, I agree -- it's way less controversial than the Singularitarian doomsday crackpottery. Nebuchadnezzar (talk) 15:52, 18 December 2012 (UTC)
- I don't think I have any particular chance of convincing y'all to do anything about the rest of this page, but even by that low standard, the part about "memetics" jumped out. Since when have I been advocating this? I'm not particularly a fan now, and don't think I've been in the past, though memory fades. -- Eliezer Yudkowsky
- Please feel free to create an account or to suggest revisions; contrary to what you might think, we are interested in good articles that reflect reality to the best of our ability.
- Anyone willing to revise this one, incidentally? I'd do it, but for a few weeks editing will continue to be onerous because of my arm.--talk 03:24, 19 December 2012 (UTC)
- This whole discussion suggests to me that it might be worth having an article on g. — Unsigned, by: ORavenhurst / talk / Do You Believe That? 13:46, 19 December 2012 (UTC)
- IQ presently redirects to High IQ society, so go and write it - David Gerard (talk) 14:48, 19 December 2012 (UTC)
His 'autobiography'[edit]
Jesus Christ, it's unbearable (link in the article). Highly recommended reading; contrary to what our article says, he claims many, many times to be a genius, both literally and in a more circumspect manner. Highlights: defending 'Buffy the Vampire Slayer' from its critics, his story about how he kicked two kids' asses in high school (secret weapon: knowledge of the solar plexus), and learning that his intuitions are 'always right'. Gold. Tielec01 (talk) 01:35, 14 May 2013 (UTC)
- He's a fan of Buffy? I may have to rethink my opinions on LW--"Shut up, Brx." 02:16, 14 May 2013 (UTC)
- At least he was embarrassed enough to let it fall off the Net - David Gerard (talk) 06:54, 14 May 2013 (UTC)
- Another professed genius without any formal education on his beloved subject matter. Brilliant. Osaka Sun (talk) 07:18, 14 May 2013 (UTC)
- Complementary reading is googleable by key phrase "complete strategic altruist" with quotes. Dmytry (talk) 09:40, 25 June 2013 (UTC)
Unprofessional hack job.[edit]
That's directed at whoever writes the articles on this site.
First of all it should probably be said that Yudkowsky is a brilliant writer and thinker. That being said, many of his positions are indeed extremely speculative.
The fact that you list "The Singularity is near!" as one of his beliefs is a pretty poor reflection upon the other things that are written here. He does not necessarily believe that the Singularity is near. If you ever read him beyond a superficial level this is quite clear. What he does say is that we need to widen our probability bounds on the timing of AGI. This means that it could occur soon (20-30 years) or take a much longer time (200+ years). His point is that, since we are uncertain of the timing of such an event, we should be cautious.
For a "rational" site, the editors deal very unprofessionally with any subject that is even remotely controversial. And no I will not edit it, because it's not worth my time.
- I'm glad it's Friday because you just gave me a reason to drink! Tielec01 (talk) 06:47, 12 July 2013 (UTC)
- Agreed. And seriously, the term "formal education" is used as if that by itself destroys any value he might have. As someone who went through formal education, I can say it is not all it's cracked up to be. Perhaps I haven't seen enough of his work, but I also don't perceive him as detesting mainstream science (he disagrees with some of academia, for sure); from what I've read, he considers the scientific method insufficient, not fallacious, and he's stated that he does not necessarily agree with anything he's written before 2001. The fanboy mannerisms that you see on LessWrong are displayed just as powerfully here in the anti-LessWrong circlejerk. But good strawmanning, guys - keep it up!
- "it should probably be said that Yudkowsky is a brilliant writer and thinker". You think the article should say that? "Singularity is near": formerly he believed it to be around 2015 or something, he got a bit more vague on the timeframe. I for one think that "it could occur within 20..30 years" is, too, a form of "Singularity is near!". Science: read "the dilemma: science or bayes" and related material Dmytry (talk) 08:10, 15 August 2013 (UTC)
May be worth updating some bits[edit]
A lot of the stuff he promotes has mutated a bit in unimportant ways; e.g., the RW article mentions "singularity is coming" a lot, but the current line is mostly that the singularity is not coming too soon because other AI researchers are stupid. Also, among the, ehm, followers, there's a definite "RW is so full of lies - it says that Oceania is at war with Eastasia, but Eastasia are our allies and we are actually at war with Eurasia" thing going on. Dmytry (talk) 19:23, 29 August 2013 (UTC)
On motorcycles[edit]
LW was discussed in the comment section of Manboobz, of all places, and one of the regulars described his experiences with Yudkowsky, including this gem:
He then tried to convince us (using his version of Bayesian reasoning) that motorcycles are irrational.
:D The whole discussion is worth looking at, particularly as an example of how LW/EY are perceived outside of RW/LW.--ZooGuard (talk) 20:02, 1 September 2013 (UTC)
- It's 1100 comments, but they're actually not shit. I'm amazed. And yes, they get onto Roko's basilisk [6] - David Gerard (talk) 22:58, 1 September 2013 (UTC)
Evolutionary Psychology[edit]
Why does the section about Yudkowsky's controversial positions include "Evolutionary Psychology is an established science"? Is this not the case?--P3A58NT86 04:08, 27 January 2014 (UTC)
It's not the case, no. (At least, not the Santa Barbara school just-so stories that get all the attention in the popular press.) 96.246.65.142 (talk) 06:03, 17 February 2014 (UTC)
Paper at AGI-11[edit]
Apparently Yudkowsky did get a paper presented at the Fourth Conference on Artificial General Intelligence in 2011 [7], and (according to the site at least) it was peer-reviewed (although since the reviewers are anonymous, they may be nuts too; I know next to nothing about AI, but this conference at least seems to have had some pretty heavy researchers there). It was apparently also published with all the rest of the conference papers in the Lecture Notes in Artificial Intelligence series [8]. I'm putting this here so someone much smarter and better at editing than me can decide whether it should be in the article or not, as I'm not very good with writing. 24.158.49.224 (talk) 03:23, 2 April 2014 (UTC)
Factual problems[edit]
This article's in a bit of a sad state. For starters:
- 1. "Yudkowsky is entirely unpublished outside of his own foundation and blogs" - Yudkowsky has published articles in The Cambridge Handbook of Artificial Intelligence, Artificial General Intelligence, Proceedings of AGI 2011, and Global Catastrophic Risks.
- 2. "Yudkowsky believes an AGI will be based on some form of decision theory (although all his examples of decision theories are computationally intractable) and/or implement some form of Bayesian logic (same problem again)." - Yudkowsky doesn't think we'll build a computationally intractable AI. He thinks decision problems and Bayesian updating are useful simplifications for toy models, because they are ideals approximated by more efficient algorithms.
- 3. "actual AI research isn't based on implementing unimplementable functions" - Same problem. If the idea here is that talking about Bayesian updating is silly, then RationalWiki is firmly aligning itself with fringe science here.
- 4. "if an AGI learns from human mentors (like HAL in 2001) it will pick up human ethics from them" - Not even all humans learn ethics adequately. (E.g., serial killers.) Claiming that all possible AGIs will be ethical if they have human mentors to emulate is speculative, even more so than anything Yudkowsky has said. On the other hand, if this is merely asserting that it's possible for an AGI to learn morality over time from humans, then I'm not seeing what the criticism of Yudkowsky is; that's the same as the approach Yudkowsky advocates, though he also thinks it's a very difficult task.
- 5. "Bayesian probability can be applied indiscriminately." - This is the opposite of Yudkowsky's view. From When (Not) To Use Probabilities: "If P != NP and the universe has no source of exponential computing power, then there are evidential updates too difficult for even a superintelligence to compute - even though the probabilities would be quite well-defined, if we could afford to calculate them. So sometimes you don't apply probability theory. Especially if you're human, and your brain has evolved with all sorts of useful algorithms for uncertain reasoning, that don't involve verbal probability assignments. Not sure where a flying ball will land? I don't advise trying to formulate a probability distribution over its landing spots, performing deliberate Bayesian updates on your glances at the ball, and calculating the expected utility of all possible strings of motor instructions to your muscles." See also Beautiful Probability and this comment.
-Mnemo (talk) 15:47, 9 June 2014 (UTC)
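On the tractability point quoted in item 5, a minimal sketch (arbitrary sizes, not from the quoted post) of why exact Bayesian updating blows up: an exact posterior over n binary unknowns means renormalizing 2^n joint states.

```python
# Count the joint states an exact update would have to renormalize.
for n in (10, 40, 80, 266):
    states = 2 ** n
    print(f"n={n:3d} binary unknowns -> {float(states):.2e} joint states")
# n=266 already exceeds the ~10^80 atoms in the observable universe,
# which is the sense in which exact updates can be "too difficult for
# even a superintelligence" to compute.
```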
- Apart from a wee tiny bit of whitewashing, that edit was fine. Do you have the direct cites for the published work, and we can add that back in? Don't click here 16:49, 10 June 2014 (UTC)
- What do you think was being whitewashed? I'm happy to talk about my edit. I added citation links for a lot of new criticisms of Yudkowsky, and you can find full citation information for a lot of this stuff on MIRI/SI/SIAI's research page.
- I know RationalWiki is a humor site, but it also provides some actually useful information about a lot of topics, and it'd be great if this became one of those topics. Pro- and anti-Yudkowsky folks alike should favor making the criticisms on this page well-reasoned and scholarly enough to be taken seriously by readers, especially if we can retain some of the lulz. -Mnemo (talk) 19:09, 10 June 2014 (UTC)
- Whitewashing: it was the bit where you took out that he was an unqualified autodidact that I went "yeah, no, may come back and fix later". Don't do that - David Gerard (talk) 23:16, 10 June 2014 (UTC)
- Ah, thanks for letting me know. I removed "Yudkowsky is entirely unpublished outside of his own foundation and blogs and never finished high school, much less done any actual AI research." (and changed "AI 'research'" to "AI claims") because it wasn't cited and seemed to be a mixture of false and misleading. The first clause seems unaware that Yudkowsky has presented papers at academic conferences and been independently published; the second clause implies that Yudkowsky is a high-school dropout, which to my knowledge isn't true; the third makes a strange and overly strong claim. Similarly unreferenced and misleading is "he has no training in the subjects he writes about". "Training" isn't the same thing as formal training, or training at accredited institutions, and theoretical computer scientists (including mistaken ones) can still be "researchers". -Mnemo (talk) 23:33, 10 June 2014 (UTC)
- There are actual qualifications and training that people who actually work with AI frequently have, and he not only doesn't have them, he actively scorns them - David Gerard (talk) 09:58, 11 June 2014 (UTC)
- Sure. Let's include that point, with references, in place of the false statements currently in the article. 'Yudkowsky lacks formal training in artificial intelligence', for example, is perfectly accurate. -Mnemo (talk) 04:22, 13 June 2014 (UTC)
- "... and openly scorns it in others" is also perfectly accurate, and needs noting - David Gerard (talk) 09:29, 13 June 2014 (UTC)
- Yup, agreed. -Mnemo (talk) 20:07, 13 June 2014 (UTC)
- Ohh, it's "theoretical computer science". I see. And what, pray tell, had he done in the field of theoretical computer science? Look, there's autodidacts who for one reason or the other couldn't attend an accredited institution. They're few but they exist. And there's the much more common variety, "autodidacts" who couldn't study. They read popularization books, mistake various semantic manipulations for the actual scientific process within the subject, and proceed to write out some sort of Cognitive-Theoretic Model of the Universe that laymen mistake for theoretical physics, or in that case, theoretical computer science. Dmytry (talk) 07:11, 14 June 2014 (UTC)
- Most of his writing is informal, though interesting from a formal perspective (e.g., the decision theory stuff). It looks like there are a few formal ones, like Tiling agents and Program equilibrium. Do you have a special objection to this work, e.g., evidence it contains novice errors? Unreferenced armchair speculation about Yudkowsky's personal work ethic seems a lot less useful than textual evidence that he doesn't know what he's talking about. -Mnemo (talk) 01:52, 15 June 2014 (UTC)
- Here's his own forum drawing blanks shortly before SIAI/SI/MIRI started hiring somewhat competent people to "co-author" papers with. As for the high school bit - and to familiarize yourself with the subject before editing articles - you should at least read his autobiography (linked at the bottom of the page). Dmytry (talk) 20:00, 17 June 2014 (UTC)
- The autobiography doesn't support the high school claim on RW. In any case, dismissing papers because they're co-authored seems like moving the goal-posts. You can try to rationalize the skepticism by asserting that Yudkowsky secretly played no role in the papers' writing, but that's an extreme allegation, and unless you have supporting evidence for it it doesn't seem like the sort of thing RW should go out on a limb to assert. -Mnemo (talk) 07:17, 30 June 2014 (UTC)
Yudkowsky hates hates hates RW's guts[edit]
- Thread title modified by Weaseloid.--ZooGuard (talk) 11:03, 30 June 2014 (UTC)
Someone mentioned Roko's basilisk a couple of days ago on reddit, resulting in Yudkowsky venting some... criticism of RW. I'm surprised DG hasn't added this to RW:PISSED (as he appeared in the comments). Thread link.
Choice snippets:
RationalWiki hates hates hates LessWrong because they think we think we're better than they are on account of being all snooty and mathematical and knowing how to do probability theory (note: RW is correct about this, I consider them undiscriminating skeptics) so they lie about us and have indeed managed to trash our reputation on large parts of the Internet; apparently a lot of people are expecting lies like this to be true and no documentation is necessary. (Disclaimer: I have not recently checked their page to see if lies are still there, and it is a wiki.)
And about Effective altruism:
I submit to you all that by far the best reason why folks at RationalWiki would act like this toward some of the clearest-cut moral exemplars of the modern world, often-young people who are donating large percentages of their incomes totaling millions of dollars to fight global poverty (in ways that Givewell has verified have high-quality experiments testifying to their effectiveness), when RWers themselves have done nothing remotely comparable, is precisely that RWers themselves have done nothing remotely comparable, and RW hates hates hates anyone who, to RW's tiny hate-filled minds, seems to act like they might think they're better than RW.
What RW has to say about effective altruism stands as an absolute testimonial to the sickness and, yes, outright evil, of RationalWiki, and the fact that RW's Skeptrolls will go after you no matter how much painstaking care you spend on science or how much good you do for other people, which is clear-cut to a far better extent than any case I could easily make with respect to their systematic campaign of lies and slander about LessWrong.
--ZooGuard (talk) 09:52, 27 June 2014 (UTC)
- Though Gerard and I seem to be currently at odds over some things, it's stuff like this that makes me happy that he's a member here. Reckless Noise Symphony (talk) 09:57, 27 June 2014 (UTC)
- RW is larger and growing well; normal people actually read and use it, and people elsewhere in the world actually care about it. Unlike LW. LW will also never get over not owning the word "rational" - David Gerard (talk) 10:09, 27 June 2014 (UTC)
- I asked EY to substantiate the claim of "lies and slander", considering I've busted arse; rather than substantiate it, he repeated the claim - David Gerard (talk) 09:42, 30 June 2014 (UTC)
- That's one way to summarize what happened, sure. I don't think you come out of that one very well, Dave. Perhaps drinking vodka martinis (NEVER DO THIS! USE GIN!) is affecting your sanity. 172.2.16.203 (talk) 00:32, 15 March 2015 (UTC)
Character assassination.[edit]
So a few months ago I found this site and found it pretty interesting and amusing. When I reached Yudkowsky's page, I found a lot of info on him that seemed alarming and pretty disgusting if true. But it didn't really fit with the rest of what was reported on him, and after looking into his website and blog and reading the mentioned short stories and articles, I decided to edit out the mischaracterizations that seemed unfounded. I left the majority of the criticisms intact, but anyone who reads Three Worlds Collide or HPMOR and sees it as evidence of a "rape apologist" needs to have their reading comprehension examined by a doctor and an English professor.
Shortly afterward, my changes were reverted. No explanation or clarification. Not individually: a blanket reversal of any changes. I did it again, explaining my position. The same thing happened again. As if to say "This page cannot be edited; it's our way or the highway."
Now, it bothered me that a "rational wiki" would have such irrational character assassination on its profiles. Why would someone who sees such blatantly untrue or libelous things take anything on this wiki seriously? What if Deepak Chopra's page is grossly exaggerating his claims? What if William Lane Craig's words and perspective are being similarly twisted?
It took me a few months to get around to it due, frankly, to having far more important things to do, but I've occasionally enjoyed this site in the meantime, and I'd like to continue to enjoy it. This one article is like a thorn in my side, though: there is information on here that is simply not true, and it seems to be written and protected by people who belong on Conservapedia rather than anywhere that prides itself on rationality.
So here's what I'm going to do: I'm going to revert this page to my previous changes. The things I take out are the things that I've found no evidence for. There are still plenty of criticisms to make of Yudkowsky without resorting to making things up, and if you find any of my changes unfair or have a source to support their inclusion, feel free to mention them. I'll monitor this page and respond, and we can discuss it, dare I say, rationally.
But until that happens, trying to revert my changes without supporting the assertions they make is going to be treated as trolling, and I'm fine with coming back and reverting the changes again as many times as needed. — Unsigned, by: CitationNeeded / talk / contribs
- Hello there, you don't seem to know what the procedure for editing around here is. First things first: if you post a new section on a talk page, that section should be posted at the very bottom. You can use the 'add topic' button to do that efficiently.
- Secondly, any big changes to articles by newcomers are usually just reverted. Big changes are always to be discussed on the talk page first before implementing them. So the correct procedure for you would be to post your concerns and proposed changes on this talk page and then wait for anyone who is interested to come and argue with you or agree with you. If nobody comes, wait three days then go ahead and implement the changes, since evidently nobody cared. Nullahnung (talk) 01:50, 9 August 2014 (UTC)
- You're quite right, I wasn't aware of that. But the changes I made were not big, and I attempted to contact the people who reverted them to discuss it. In one case they did not answer, leading me to believe they're trolling/have a vendetta and can't be bothered to justify their changes, and in the second case they basically explained that it's okay to make changes as long as a Talk section is made.
- So here's the talk section. I've already undone the edit, so now it should be easy to see what changes I've made, even if it's just reverted again. Anyone who wants to discuss something I changed can do so, and, as you say, if no one does after 3 days, I'll make the changes again. — Unsigned, by: CitationNeeded / talk / contribs
- Sounds reasonable. Here's another thing to keep in mind: On talk pages, please sign your comments using four tildes (~~~~) or by clicking on the sign button: on the toolbar above the edit panel. You can also indent successive talk page comments using one more colon (:) for each line. Thank you. Nullahnung (talk) 02:09, 9 August 2014 (UTC)
- I don't see any problems with Citation's edits, nor are they major changes. Nebuchadnezzar (talk) 02:45, 9 August 2014 (UTC)
- Got it, thanks. --CitationNeeded (talk) 16:04, 9 August 2014 (UTC)
"Eliza Eliezer"[edit]
Has it been considered that "Eliezer" is actually a mis-hearing of ELIZA? The prototype AI? Could his warnings of a trickster, lying AI, misleading humans into releasing it into the world, be him speaking from personal experience? After all, where better to serve the interest of the future UFAI, than as an AI RESEARCHER!?
Could it be that the many logical and personality-based problems this "man" exhibits are actually created deliberately to discredit him and the field he claims to serve? By deliberately surrounding himself with Yes-men ("yes" being one interpretation of the binary "1"), he dupes science into thinking those theories are the crackpot nonsense of an obsessive nerd?
I'm just asking questions, like Glenn Beck.
Has anyone seen "Eliezer" in the flesh? Does he whirr, and tick? Does he stutter like Max Headroom?
See, we may laugh, but come 2045, who'll be laughing loudest? Chances are, it's still us, sharing a joke with the Ro-Butler.
WAKE UP PEOPLE!
188.29.164.84 (talk) 19:26, 29 August 2015 (UTC)
- I kind of prefer this conspiracy theory: Was his suppression of Roko's basilisk on Less Wrong just an elaborate use of the Streisand effect to draw attention to his AI foom theory? Wake up sheeple!--Greenrd (talk) 22:46, 12 March 2016 (UTC)
- I'm really looking forward to the release [I'm beta-ing it] of Phil Sandifer's book on neoreaction, which includes extensive discussion of Yudkowsky (who repudiates neoreaction, but is clearly a massive influence on it); in particular his explanation of Yudkowsky's horror at Roko's basilisk being that he invented these ideas for the purpose of living forever as a simulation in the mind of AI God, and it turns out there's a monster implicit in his philosophy that he (and indeed Roko) had clearly never imagined was possible. As the 2016 interview shows, Yudkowsky hasn't changed an opinion, an idea, or his opinion of his own ideas in ten years; being actually surprised by a result is not something he is used to, and it's clear this was a huge shock - David Gerard (talk) 21:35, 13 March 2016 (UTC)
The Donald Trump of AI[edit]
I just realized that this is what he has become: the "Make the AI field great again" guy. I mean, Yudkowsky and Trump couldn't be farther apart on stated positions, but the methodology is the same: you know, how he appeals to people's firm desires, and how he doesn't care too much about facts or doability, just assuming that he can do it. — Unsigned, by: Yourheadhold / talk / contribs
- Hahaha, told ya. Now that the Don has won, we can see how people like Yudkowsky can rise to fame even if they lack the ability to do what they claim to be doing! — Unsigned, by: 189.230.109.79 / talk / contribs
- On talk pages, please sign your comments using four tildes (~~~~) or by clicking on the sign button: on the toolbar above the edit panel. You can also indent successive talk page comments using one more colon (:) for each line. Thank you. Reverend Black Percy (talk) 19:34, 17 March 2017 (UTC)
su3su2u1's HPMOR critique is down[edit]
One of the cultists threatened to dox him, so away it went. Love these folk. People are currently looking around for copies - David Gerard (talk) 17:12, 27 February 2016 (UTC)
- Most of it found on archive.is - David Gerard (talk) 21:28, 27 February 2016 (UTC)
Okay, the Tumblr interface is awful. Can you tell me where it says that someone actually threatened to dox him? All I can find is the author saying to other LessWrongers that they should not dox him, and that he has already confirmed su3su2u1 is a scientist. Whimperawe (talk) 15:54, 12 May 2016 (UTC)
- shlevy threatened su3su2u1 with his doxxability unless he desisted in his posting. su3su2u1 disappeared. Your sealioning is implausible - David Gerard (talk) 22:40, 12 May 2016 (UTC)
Is it true he claimed he could rewire his brain?[edit]
I saw some internet people mocking him saying he said this. Specifically that he could only do it once, so he'd do it to save all mankind. Is there any truth to the claim that he said it? ikanreed You probably didn't deserve that 20:39, 23 June 2016 (UTC)
- Yes, though he was quite young and stupid when he did and hasn't confirmed it since - David Gerard (talk) 09:18, 24 June 2016 (UTC)
- Yes, here I present to you 20-year-old Eliezer Yudkowsky's work on what he called neurohacking: http://web.archive.org/web/20010202171200/http://sysopmind.com/algernon.html
Nonsense and fun[edit]
Horgan: Will superintelligences solve the “hard problem” of consciousness?
Yudkowsky: Yes, and in retrospect the answer will look embarrassingly obvious from our perspective.
Horgan: Will superintelligences possess free will?
Yudkowsky: Yes, but they won't have the illusion of free will.
Horgan to himself:
But they do not have the illusion of free will. They have. He said. This. His volume, his white fire, his nonsense, it struck me as the most absurd thing I've ever heard. I was simply grounded, stuck to the spot. No more meaningless phrase had ever been spoken; it was like a newborn child inventing a "new" language, a misconstrued version of English with half of its messy coherence.
He stood expectant, almost asking me to nod my head and congratulate him on his philological provenance. And so I looked, I looked on, with his maddened stare at my back and a pile of a thousand hastily written blogposts and webprose at my front, and it only confused me more.
I was incontinent, swirling by that point; maybe this was one of his charms. I had been shanghaied and had fallen, like Alice, into the postmodernist bullcrap-hole; millions of LessWrongers gathered around me dressed in sumo thongs; they bullied me into compliance, chanting in Evanescence tones of the world to come, of the beard to grow, the mirth and joy of one man risen above them all, ushering in a world of magical wonder with android-gynoid bots ready to satiate my every whim and vulgar, manic appetite...
And as he dragged his laborious and redundant topic half a mile, jostled with his own sentence and word, contradicting and curb-stomping the head of the dummy strawman before those who had their cluttered minds battered into his muddied prose, he rose. He rose to the cheers of a new religion, of a novelty toy; he had the carcass of Lord Voldemort, and I begged and prayed, having been taken into this strange palace of crooked ways.
My blubbery dread had come to a head. I had to know, had to extirpate the earworm of this new philosophy, the first virgin post that had started it all, all those years ago, into my youthful and naive Horgan-self. And I realized that before my doubt had torn me apart, I had to ask: What did you mean?
But by then, by then He was gone.
Fuck. Sickening, hopeless.
Gone like the Dark Knight, gone like Deep Throat, simply gone. — Unsigned, by: Razzledazzle / talk / contribs Requesting thread archival (why?) Plutocow (talk)
What is this?[edit]
I don't understand something: how can people be promoters of pseudoscience and be relevant to rationalism at the same time? (I'm asking about the hashtags at the bottom of the article.) — Unsigned, by: 195.181.174.143 / talk Requesting thread archival (why?) Plutocow (talk)