Talk:LessWrong/Archive8
Maybe it's more appropriately described as a sect rather than a cult?
I think we (me) may have overreacted calling it a cult. 'Sect' would be more in line with how they behave, especially when it comes to word misuse, jargon, and weird beliefs. Sects always believe they have a dramatically better clue than everyone else; religious sects believe they are the only ones truly trying to follow the word of God, and the 'rationalist' sect believes they are the ones most 'dedicated to improving the art of human rationality'. That belief gravely impairs the ability to improve anything: they started off replacing a few select pieces of the semi-finely tuned system of ad-hoc hacks that is human reasoning, and doing so itself impaired the reasoning that actually gets done in given time on given hardware, in a self-reinforcing loop. Doing the best you can in given time on given hardware, on a task you can't solve exactly, is never neat. Making heuristics is my job, and I even did an online programming contest with a toy image classification task, which I think involved Bayes in at least a few derivations (a very minor detail that I don't remember). Dmytry (talk) 12:39, 8 July 2012 (UTC)
- A sect? What exactly do you think a sect is? And what would this site be a sect of? Godot Stop the damn screeds! 15:59, 8 July 2012 (UTC)
- http://en.wikipedia.org/wiki/Sect . It's a bit broader now than just 'splinter from a religious group'. A sect of singularitarianism, mixed up with elements of Eliezer diversifying from being an AI crackpot into also being a self-improvement guru and self-proclaimed improver of the art of human rationality. Dmytry (talk) 08:33, 9 July 2012 (UTC)
- Aren't sects groups within religions? For instance, Catholicism and Protestantism are sects of Christianity, ditto with Sunni and Shia for Islam. If we look at Novella's take on cults, then LW has some similarities, but they are not overwhelming. I'd rather describe it as an Internet crank (yes really, have a look at any crank index and see how much the Yudkowskist enterprise leaps out at you) with a fanbase who for some reason unknown to me put huge stock in him being the smartest (or as they'd put it: most RATIONAL) guy in the universe, and proceed from there. I think many of the problems with LW stem from how much weight they put on Yudkowsky's words and how little weight mainstream science carries unless it is approved of by Yudkowsky. Want to learn about any of the subjects Yudkowsky is supposedly an expert on (which seems to be almost everything in the LW universe)? Pick up a basic university textbook on it.--Baloney Detection (talk) 21:53, 9 July 2012 (UTC)
- Wikipedia says "A sect is a group with distinctive religious, political or philosophical beliefs." Philosophical and political, that is. Also, EY is just part of the puzzle. The rest of the puzzle is why the hell he's taken seriously, and that imo is the same phenomenon as why other such crap (NXIVM for instance) is taken seriously. Arrogance for sale - praise the prophet, do the no-effort 'training', feel above everyone else. Dmytry (talk) 11:33, 11 July 2012 (UTC)
- I just looked up NXIVM. Oh wow. Please write an article on them - David Gerard (talk) 13:26, 11 July 2012 (UTC)
- They are not running a big online discussion board so I haven't interacted with any people there directly. Basically, NXIVM is the creation of an older, smarter, more successful, and perhaps more cynical version of EY. The founder scored ultra-highly on a (rather dubious) IQ test, and I would guess he genuinely believes that he's sharing deep, highly useful wisdom with the people who join (I have trouble imagining a scenario where he wouldn't allow himself to believe something so nice to believe, and I can't see how anyone or anything would convince him otherwise). Dmytry (talk) 20:51, 12 July 2012 (UTC)
- I see. As for why he is taken seriously, my impression is that too many people look at his claim of being "rational" and proceed from there. Traditional woomeisters tend to openly dismiss science and reason and are thus easily spotted.--Baloney Detection (talk) 12:16, 14 July 2012 (UTC)
Try tabooing 'sect' and 'cult'--Ceilingcat (talk) 18:00, 9 August 2012 (UTC)
How LW works
http://lesswrong.com/r/discussion/lw/dxr/epiphany_addiction/ - David Gerard (talk) 19:23, 3 August 2012 (UTC)
- I don't quite get it. Are you trying to make them realize it's mostly a balloon? Or is there something else?--Baloney Detection (talk) 22:09, 3 August 2012 (UTC)
The awesomeness of Yudkowsky according to the SI
At the Singularity Institute's site, they write about Yudkowsky:
"Eliezer Yudkowsky is the foremost researcher on Friendly AI and recursive self-improvement."
Oh really?--Baloney Detection (talk) 08:30, 8 August 2012 (UTC)
- Well, no-one in AI cares except SIAI/LW, and he's leading the charge there - David Gerard (talk) 09:22, 8 August 2012 (UTC)
- But then isn't that like me proclaiming myself the foremost expert on thoriology (the theology of Thor)? Or the world's foremost SimCity player? That is, completely meaningless. The discussion on Yudkowsky's Wikipedia entry is, well, enlightening. Though it's freaking old (2005), so I don't know how accurate it is now.--Baloney Detection (talk) 13:52, 12 August 2012 (UTC)
- Well, actually, in AI there is work on the 'safety' aspects, like trying to prove stuff about AIXI; it's just that they don't sell it with doomsday bullshit and the general promises of hell and salvation. EY does, and tries to sell himself as the originator of the whole field. It is also painfully obvious that while they claim some ridiculous expected lives saved per dollar (and while on LW they blatantly use a voter bot to create the impression that 'donate only to the top EV charity' is an established concept), none of them actually believes the slightest bit of it (or they have it so compartmentalized that it never affects actual deeds). Dmytry (talk) 10:59, 8 August 2012 (UTC)
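As an aside on the "expected lives saved per dollar" pitch, here is a minimal worked sketch (the function and every number below are invented for illustration, not taken from SI) of why that figure can be made as flattering as you like: it is just a tiny self-assigned probability multiplied by an astronomical payoff, divided by the donation.

```python
# Hypothetical illustration only: "expected lives saved per dollar" as a
# tiny, self-assigned probability times an astronomical payoff.

def expected_lives_per_dollar(p_doom_averted, lives_at_stake, donation_usd):
    """Expected lives saved per dollar, under the claimant's own assumptions."""
    return p_doom_averted * lives_at_stake / donation_usd

# Made-up numbers: a 1-in-a-billion chance that a $100 donation tips the
# balance for 10^16 future lives still yields 100,000 "lives per dollar".
print(expected_lives_per_dollar(1e-9, 1e16, 100.0))  # 100000.0
```

The sketch's only point is that the output scales with whatever probability and payoff the claimant picks, so the headline number carries no information by itself.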
- Friendly...? Like, user-friendly? Or damage-minimizing/human avoidance in task completion? I have a hard time pairing the normal definition of 'friendly' with 'AI.' Not because AIs are inherently unfriendly, but because where we are now, and even projected into the future, autonomy means 'can do the stuff we tell it to do on its own,' rather than 'does the stuff we tell it to because we're remotely managing it ourselves.' There's just no decision to be 'friendly' or not, yet. Nor is there any definition of 'self-improvement' (of... AI? AI writing its own code?) to speak of that means what I assume it means. I mean, I am not an expert, and I could totally be unaware of new developments in AI software, but some of the biggest applications for autonomy right now have nothing to do with the unit storing unrelated algorithms or data or becoming 'broader', but with becoming more specific. For example, landing an aircraft in adverse conditions, such as on a moving platform like a boat, or in a storm: the autonomy protocols would switch on and JUST handle the landing guidance. The first instance of autonomy we'll probably see in mainstream use (aside from the Roomba and such) is expensive cars that parallel-park themselves. And then that will slowly trickle down to normal cars as the technology becomes more widely available and affordable, like all the other features of normal cars we take for granted today. ±KnightOfTL;DR walls of text while-u-wait 11:33, 8 August 2012 (UTC)
- No, you're not missing anything here. The "research" on FAI is more applicable to a Terminator movie script than reality. Nebuchadnezzar (talk) 11:49, 8 August 2012 (UTC)
- Hey, don't knock the Terminator movie script. At least there the software, which was not engineered to self-preserve, had to become self-aware before it would self-preserve. And to bring itself about it had to send a terminator back in time to go berserk, not just sit and hope someone would go basilisk on some nerds. Scriptwriters at least think through the important details. Dmytry (talk) 13:46, 8 August 2012 (UTC)
This reminded me of the time I linked, on LW, the video where Eliezer gets announced as the foremost expert in friendly AI and recursive self-improvement, and pointed out that the latter part normally implies making something that actually self-improves. The answer was that obviously the announcer was not well informed or something, not their fault. Well, yeah, lol. There was also a recent incident that's also a fair bit funny. Dmytry (talk) 22:16, 10 August 2012 (UTC)
It's not blatant racism, it's "human biodiversity"
Swarming here. (Because liberals.) Also, VDARE is not racist, apparently. Can't wait for the CEV these folk come up with - David Gerard (talk) 11:37, 11 August 2012 (UTC)
- The more I read of this, the more I realise that outside the few posters who know what they're talking about, a substantial amount of upvoting has less to do with how intelligent the post is and more to do with how many times you can fit the local buzzwords into a reasonably small space. sshole 13:09, 11 August 2012 (UTC)
- Lulz. Well, I think we have a cluster of viewpoints. Funny that they mention that study of Gwern's. I'm curious whether he did other studies and found some means of techno-disruption that would be effective, such as shooting a research team with a machine gun. I'd be quite surprised if he did not. The AI stuff, the belief that scientists will kill everyone by accident without Our Well-Chosen Prophet's improvements to the 3 laws of robotics, and the distrust of mind uploads are probably just part of a general xenophobic/narcissistic mindset, as it can't really be much else. Dmytry (talk) 13:45, 11 August 2012 (UTC)
- That's not just any racialist crank he links to, that's Richard Lynn, who wrote such classics as Dysgenics: Genetic Deterioration in Modern Populations and IQ and the Wealth of Nations. Nebuchadnezzar (talk) 17:09, 11 August 2012 (UTC)
- ahh, IQ-mongering guy. That's quite fitting, given how much their self esteem relies on IQ test results and the like. Dmytry (talk) 18:20, 11 August 2012 (UTC)
- "ahh, IQ-mongering guy." That's an understatement. More like eugenics-mongering guy. The word "dysgenics" was coined by Francis Galton to refer to a decline in the genetic "stock" of a population -- it's basically eugenics-speak for "Oh noez, the poor brown people are outbreeding us!!111!" Lynn is also one of the well-known beneficiaries of cash-money from the eugenicist Pioneer Fund, which keeps the last remaining batch of eugenicists and racialists in science afloat. Nebuchadnezzar (talk) 18:36, 11 August 2012 (UTC)
- That's pretty much precisely how to get upvotes: throw around the local jargon fluently. (The really easy way to get upvotes is to comment in a Harry Potter thread soon after a new one's up.) Got me >7000 karma almost entirely from comments - David Gerard (talk) 23:30, 11 August 2012 (UTC)
- In all fairness, abusiveness towards SI gets upvoted too, unless you're on the shit list of The Internet Police, which is about 3-4 members strong (or fewer) and which concerns itself with the profitability of the scam by doing shit like simulating local consensus around the 'donate only to the top EV charity' idea and possibly keeping out anyone who makes good counter-arguments. Seriously guys, SI is basically a scam, a way for EY and Luke to have fun and feel important while doing nothing of interest, and LW is a scam-supporting forum complete with all the features of a scam-supporting forum. 'If someone asks for money online, it's probably a fraud' holds. If someone asks for money for a cause and themselves consistently resolve any tradeoffs between what helps their supposed cause and what they feel like doing in favour of the latter, that's certainly a fraud/scam. Proceed from there with expectations as to what those 'vote' numbers actually mean on a site that exists solely to provide money to the scam. Dmytry (talk) 06:12, 12 August 2012 (UTC)
- Actually I think Luke Muehlhauser honestly believes in it. The guy is the most efficient Internet writer I have ever seen. If he decides to devote himself to some cause, he will work the hell out of himself for it. Though you are right that LW exists to feed the SIAI, by their own admission.--Baloney Detection (talk) 13:58, 12 August 2012 (UTC)
- I came to the conclusion that the correct model is unprincipled kiddies having fun. They don't do the unfun bits of defrauding optimally, but neither do they do the unfun bits of worrying about the risk. Consider the Muehlhauser-Wang dialogue, untroubled by the slightest trace of worry that he might annoy an editor of an important journal. They don't resolve tradeoffs between doing whatever they feel like doing and whatever helps their cause in favour of the cause. The technical arguments are only a distraction. Basically, they believe they are saving the world (which is fun) but they do not believe there is a risk. Just playing a fun game. Dmytry (talk) 15:10, 12 August 2012 (UTC)
- With regards to upvoting for using local jargon, it seems to me that another way to get upvotes is to squeeze links to LW writings into your posts, for example complexity of values, politics is the mindkiller and the rest of it. Oh, and btw Kruel's list of Yudkowsky quotes is back!--Baloney Detection (talk) 13:47, 12 August 2012 (UTC)
- Oh yeah, link the jargon as well as using it - David Gerard (talk) 14:36, 12 August 2012 (UTC)
- If I made a forum to feed a fraudulent charity, I could make a bot (or have a volunteer team of more-or-less true believers) upvote for jargon use at the start. Reddit also manipulated votes at the start, to set standards for what gets upvoted and downvoted. Edit: that is, if I thought my jargon was a great idea. Dmytry (talk) 15:24, 12 August 2012 (UTC)
- It was seeded with the libertarian transhumanists from EY's old SL4 mailing list; a lot of the same people writing the same way. (And convinced that EY was the brightest guy they'd ever heard of.) Although Reddit seeded with sockpuppets, they didn't set up an entire years-long mailing list to give the sockpuppets a backstory - David Gerard (talk) 16:03, 12 August 2012 (UTC)
- Ghmm. Good point on SL4. Still, keep in mind that the LW community is hundreds of people, while you only need 10 socks to almost completely control the votes (people conform). SL4 is a great source for Eliezer quotes btw. My favourites: this and this. Edit: i.e. my point is, you can't very well deduce what the community is like from how they vote. I expect that the vast majority rarely vote, while a small minority vote on everything, 'strategically' too. I would think that jargon getting upvoted is a natural phenomenon, but at the same time, you can have, say, 10 obsessive jargon lovers, plus the rest of the people assuming it must be some awesome insight when it's at +10. Dmytry (talk) 18:07, 12 August 2012 (UTC)
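To put numbers on that last point, here is a toy simulation (the model and every parameter below are my own assumptions, not a description of LW's actual voting software or user base): if ordinary readers upvote mostly out of conformity once a post's visible score passes a threshold, a 10-vote bloc changes the outcome far out of proportion to its size.

```python
# Toy conformity model (assumptions are mine, not LW's real mechanics):
# readers upvote a post with a low probability on its merits, but with a
# much higher probability once its visible score passes a threshold
# ("it's at +10, it must be some awesome insight").
import random

def final_score(bloc_size, readers=200, merit_p=0.02, herd_p=0.4,
                threshold=10, seed=1):
    random.seed(seed)
    score = bloc_size                      # coordinated early upvotes
    for _ in range(readers):
        p_up = herd_p if score >= threshold else merit_p
        if random.random() < p_up:
            score += 1
    return score

print(final_score(bloc_size=0))    # mediocre post left to ordinary readers
print(final_score(bloc_size=10))   # same post, seeded by a 10-vote bloc
```

With these made-up parameters the unseeded post usually ends up around +4, while the seeded one lands near +90: the "small minority votes on everything" point in miniature.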
LessWrong in the news
Found it via kruel.co, here: http://betabeat.com/2012/07/singularity-institute-less-wrong-peter-thiel-eliezer-yudkowsky-ray-kurzweil-harry-potter-methods-of-rationality/?show=all
The article indicates to me that "rationality" (Yudkowsky's version of it, at least) is more of a marketing technique rather than what LW is actually about. It says that the NY chapter of LW meetups (their biggest one, I think) is mostly focused on futurism/singularitarianism despite labelling itself otherwise. Secondly, they state openly that they try to hide some of their beliefs:
"Michael Vassar, the former president of Singularity Institute, who stepped down in January to pursue his idea for a personalized medicine startup–later bringing on Mr. Mowshowitz and Ms. Vance–admitted the nonprofit had learned to hide some of its more radical ideas, emphasizing rationality instead.
As Mr. Yudkowsky put it, “There are plenty of people out there who would be interested in cognitive science-based thinking skills who wouldn’t necessarily buy into the whole ‘save humanity’ thing.”"
The basilisk also gets coverage.--Baloney Detection (talk) 10:24, 28 July 2012 (UTC)
- A surprisingly well-sourced article. LW's reaction is pretty funny (they have a discussion thread). It's rather shitty, though, that there isn't a group for atheists that gives the cuddling and illusory superiority without the Skynet crap and without talking people into leaving school (and not studying outside school either, just going straight to philosophizing about AI). Dmytry (talk) 17:40, 28 July 2012 (UTC)
- Look at meetup.com, there are plenty of meetups for atheists, skeptics and the like. Far from all atheists are into LW futurism/singularitarianism/"rationality". I think most atheists have never heard of LW.--Baloney Detection (talk) 12:20, 29 July 2012 (UTC)
- From experience, some of the smaller LW meetup groups are sort of like what Dmytry described. The one in my city focused on life-hacking and discussion of cognitive biases; transhumanism/Singularity topics were referenced in passing but never really discussed. (Alas, we don't really have formal meetings anymore.) I was really surprised at the description of the New York LW group, I had no idea it is so transhumanist and has those sorts of social norms. Tetronian you're clueless 20:50, 30 July 2012 (UTC)
- Well, if the "rationality" stuff is a marketing ploy (which by their own admission it is), then it makes sense for LW meetup groups arising spontaneously, or with little direct input from the core members, to be focused on that, whereas those in closer touch with the core of the site are more transhumanist in nature. At least that's my guess. EY wrote his epistle to the NY group after visiting them.--Baloney Detection (talk) 20:47, 31 July 2012 (UTC)
- This is hilarious. Even the SIAI employees in the thread are making fun of EY over this one - David Gerard (talk) 03:04, 1 September 2012 (UTC)
- Oh wow. That wouldn't be much different if it was about furries. --83.84.137.22 (talk) 06:40, 1 September 2012 (UTC)