Talk:LessWrong/Archive15


This is an archive page, last updated 3 May 2016. Please do not make edits to this page.

Another forum discovers LW

They react in the usual way, i.e. not all that well. I found this comment sums up a lot of my problems with the LW memplex, though - David Gerard (talk) 07:48, 18 January 2013 (UTC)

I don't suppose that anybody wants to try out Forum:LessWrong? That way this page can stick to the article. Hell, WIGO:LW would probably be a really good idea.--"Shut up, Brx." 07:49, 18 January 2013 (UTC)
I don't especially like the idea of another site-specific wigo. Wigo:CP is fading, Wigo:Citizendium is mostly dead, and Wigo:4R is long dead. Wigo:Clog, Wigo:Blog, and Wigo:World are good ideas because they not only share posts and articles of interest with a wide range of users, but also because they're free to draw from a diverse base of material (the entire wingnutosphere, the entire blogosphere, or anything in the news). A Wigo:LW would not only primarily appeal to a relatively small group of people, but its source material would be extremely limited in scope. Such a well would dry up quickly. Radioactive afikomen Please ignore all my awful pre-2014 comments. 08:11, 18 January 2013 (UTC)

So, anyone know more about the California enclave?

I've been reading that they even cohabit in houses: “I’d certainly say that we don’t think ‘poly’ is morally wrong or anything,” she noted, adding that the California contingent had taken the idea quite a bit further. “In one of those [co-]houses, I saw a big white board on the wall with a ‘poly-graph,’ a big diagram of who was connected to whom,” she said. “It was a pretty big graph.” (from http://betabeat.com/2012/07/singularity-institute-less-wrong-peter-thiel-eliezer-yudkowsky-ray-kurzweil-harry-potter-methods-of-rationality/?show=all ). If we ignore for a moment that they brand themselves rationalists and re-frame theology as futurology, we get a fairly ordinary doomsday cult, the most unusual thing being that it has a rather popular online board. It seems to me that it would be most interesting to have some info about the actual communities in the article, not just about the internet forum. The article is written as if whatever is not online doesn't exist. Dmytry (talk) 10:31, 31 December 2012 (UTC) another source: http://lesswrong.com/lw/atm/cult_impressions_of_less_wrongsingularity/60xl Dmytry (talk) 10:35, 31 December 2012 (UTC)

You seem to know more about the community than most. Perhaps you should write the section. Dump it here for review if you think it needs it, although it probably won't.--Brendiggg (talk) 12:11, 31 December 2012 (UTC)
I have no idea what they do beyond what they post about on their site... I'm hoping someone knows someone who has seen it up close. I'll write something on the rituals though. The new one where rationality lets them see the horrors, with quotes from H. P. Lovecraft and so on, is quite illuminating. I guess we won't have much info until they grow enough to have someone break out with juicy inner details. Dmytry (talk) 13:37, 31 December 2012 (UTC)
I wrote a pretty all encompassing article describing what happened. I'm not sure what else you were looking for. - Raemon
None of those lifestyle elements are at all outlandish thereabouts, and in no way whatsoever are they LW/SIAI-specific - David Gerard (talk) 21:13, 31 December 2012 (UTC)
It's not about how rare it is. A cohabiting, tight-knit group of like-minded doomsday singularitarians would be dumber and crazier than any single member could be alone. Dmytry (talk) 23:46, 31 December 2012 (UTC)
It seemed more like you were objecting to the rationalist free love thing they have going. Radioactive afikomen Please ignore all my awful pre-2014 comments. 00:07, 1 January 2013 (UTC)
Hmm, I should have trimmed the quote more. Nothing against it per se, except that, of course, if a doomsday group keeps its relationships inside the group, that wouldn't be very good for sanity. Speaking of which, they see themselves as more sane (obviously - it's us who are insane, with our denial of the extreme threat to mankind). Dmytry (talk) 00:52, 1 January 2013 (UTC)

I live in "the California enclave", if that's what you want to call the fact that a site that mostly appeals to programmers has a disproportionate number of people in Silicon Valley. The Singularity Institute and the Center for Applied Rationality are both in Berkeley, so a lot of the people there know each other through work and have formed a pretty tight-knit community. They invite the Less Wrongers in the area to their social events, people get to know each other, and yes, some people meet at those events and end up rooming together or dating. It doesn't seem overwhelmingly different than my college atheist club, where people who met at the club sometimes hung out together and sometimes ended up rooming together or dating. There are definitely a lot more people into open relationships here than anywhere else I've been, but it works surprisingly well and seems to be a pretty common thing in Berkeley even outside the LW community. I'm happy to answer any other questions you might have ("I live in an enclave, AMA"?) - Yvain

Do you cohabit with some group of LessWrongers? Also, I don't get why you guys are so opposed to being called a cult. You've got a doomsday, a god, a rationalized afterlife, and an autodidact charismatic leader who constantly writes on topics he really doesn't understand (yes, really), and those shaky things are not in any way kept separate from "rationality" but are instead presented as "anyone who doesn't believe is an idiot" sort of things, by the leader at least. You can claim those beliefs were arrived at rationally, but if you dig a little, everyone will be saying "I got strong priors", which means "I just believe". Dmytry (talk) 09:13, 1 January 2013 (UTC)
Yes, I live with my girlfriend and one other person. The rent is $2000/month (the Bay Area is really expensive) so I'm glad they're around and willing to help pay. As for cults...hmmm...it would be easy to mistake environmentalism for a cult. Environmentalists believe in a potential doomsday (runaway global warming) and they often ask you to donate money to their organizations to prevent it. A lot of them live together in sustainable housing communities. They have their own dietary restrictions (organic), their charismatic environmentalist leaders (does Al Gore count as charismatic? He's getting better) and some of them even have a militant wing. But this doesn't mean we can immediately and unreflectively assume "Environmentalists are just a religious cult! Global warming is a LIE!", even though many people try to do exactly that. It just means they've had the bad luck to stumble into a real problem that sets off some people's "sounds like religious fanatics" detector. Just because you can sort of pattern-match some aspects of X to some aspects of Y doesn't mean you can assume X is like Y in every way. I've mentioned many times how much I dislike that style of argumentation before and I don't think you're doing it any favors here. Yes, we have some weird beliefs that an uncharitable interlocutor could pattern-match to other, less rational weird beliefs. But if you have problems with us, I'd rather you debate the actual problems (which tends to get welcomed on LW) than talk about what scary metaphors they sound like (which, when done ad nauseam without any substantive points, was what got you banned). Also, regarding people saying "I just believe" without having any real arguments - I'm almost sure your excellent article pointed out that the collection of arguments that led us to our beliefs has "an extraordinary length...surpassing J.R.R. Tolkien's Lord of the Rings" or some such. Have you read them yet? - Yvain
It seems to me that you've got a universal counter-argument here. If you had neighbours who believed you were killing them with your mind rays, or that you're building a nuclear bomb in your basement which you were going to accidentally set off, would you feel at all threatened, or would you really call that the worst argument in the world? By the way, I've read the writings on AI danger; there's no technical substance there, and it's at the far end of the spectrum. There are global warming crackpots who believe in extreme claims entirely unsupported by science, and you could start a doomsday cult around global warming just fine. But okay, on the "real problems": belief in doomsday is obviously dangerous. When there's some really advanced problem-solving software, some of these guys will freak out big time. Even without that, some freak out over something as dumb as the basilisk. I don't see any reason why the evidence from other doomsday cults would be inapplicable to this. edit: damn it, you edited your argument. Yes, votes netting positive for critique is a positive sign for sure, albeit it is far too easy to second-guess this sort of thing. Dmytry (talk) 20:06, 1 January 2013 (UTC)
Now you seem to be changing your argument from "they are a cult", which I do think is Worst Argument In The World, to "they might commit violence", which is a perfectly good empirical question. But I worry that "they might commit violence" can be used on almost any group and so it's unfair to use it on us specifically. Environmentalism might convince people to commit violence against companies that are "destroying the Earth" (as has in fact happened). Conservativism might convince people to commit violence against liberal politicians who are "destroying our country" (as has in fact happened). Heck, a guy recently got beaten up for being a conspiracy theorist - does that mean RationalWiki, which singles out conspiracy theorists and talks about what bad people they are, is at risk of "promoting violence"? By this argument, anyone who points out a potential danger, or has an opinion that anything is bad, is "promoting violence" and needs to be stopped. I think it's much saner to wait to condemn people until they *actually* promote violence in some way. I can't see a shred of evidence that Less Wrong or SI has done so, or that anyone *except you* is promoting this interpretation of AI risk. Eliezer has said both that he thinks Unfriendly AI is decades away and that no current team is even remotely a threat, and that he thinks ends don't justify means even in the case of saving the world. As far as I can tell we are about as low-risk for violence as it is possible to be while still actually having opinions. Also, can we stick to questions about living in California from now on as you have tried to have this debate with me many times and I don't really want to have it again? - Yvain
To be fair, "they might commit violence" is not an abstract concern, given that one of the reasons Eliezer went all censorious on the idiot discussion of consequentialist reasons for killing tobacco company executives was that someone read it, thought it was actually a good idea and had to be talked out of it. (I can't find the post; there are ~500 long-winded responses already to the thread saying this shit wasn't on.) If you claim there are not nutters who will take LessWrong memes and add 2+2 to get 666, I will say you are demonstrably wrong. I see Gwern has edited his essay on slowing Moore's law because even more people took it as advocacy of mass sabotage of fabs. Responsibility for the implications of what one writes is a goddamn pain in the arse, because idiots reading what one writes are so very creatively stupid (and on that one I've had people email me whom I then had to tell "no, Gwern is not actually a terrorist, seriously"). David Gerard (talk) 22:14, 1 January 2013 (UTC)
I didn't get to see the thread you're referring to, but "over five years, there was one person who mouthed off on the Internet" seems pretty uncompelling for a few reasons. First, I bet you could find the same discussion of the implications of consequentialism (does it suggest it's morally good to murder bad people?) in papers in academic philosophy journals, but we don't worry academic philosophers are going to go kill anyone. Second, although I know the mantra is "zero tolerance for anything that sounds violent", in practice we don't freak out when local college students mouth off about how we should start a violent revolution and smash the state, and this seems much closer to that category than to any category that results in actual crime. Third, as Eliezer pointed out, if anyone with half a brain were actually seriously thinking of committing a crime, the last thing they'd do is post about it on the Internet. So basically I think this is a double standard - if communists, environmentalists, academic philosophers, or anyone else were to have that discussion, it would be par for the course. Right now I think LW has done everything they can to discourage violence: Eliezer's spoken out against it, the moderators have banned discussion of it, et cetera. I don't really see what Dmytry et al want other than to completely close up shop and never talk about a potential problem because of the small chance it might move some lone crazy to an action no one endorses. - Yvain

This post is interesting in the above context. Note that it's from 2010 and about killing tobacco company employees. It was originally an example about destroying chip factories to slow down AI, like Gwern's example, but was changed after a suggestion in the comments from Yvain. And note this paean against compartmentalisation - an SIAI researcher strongly emphasising that you should put all your ideas together and see what the consequences are - David Gerard (talk) 21:04, 2 January 2013 (UTC)

Note that there are two writings by "Gwern": one where he says there are two ways to stop Moore's law and concludes that one of them (sabotage) won't work, and another where he uses Goldman Sachs as an example target (with details he found online about GS). edit: also, I don't want you to fire Gwern or something, because it would probably be worse than having him write such articles. Dmytry (talk) 03:02, 3 January 2013 (UTC)
Ohh, I skipped a post by Yvain in that thread David linked: "I suppose the difference is whether you're doing the Intel attack now, or in a hypothetical future in which Intel is making brain simulators that seem likely to become AGI. As someone else mentioned, if we're talking about literally THEY ARE BUILDING SKYNET RIGHT NOW, then violence seems like the right idea." Yea. That's the difference. I see. Dmytry (talk) 19:17, 5 January 2013 (UTC)
What exactly defines the "California enclave"? When I looked at Facebook quite a while back, the LW group had about 700-800 members or so; I suppose not all of them are from California. Judging from the data I could find, RW actually gets more visitors than LW (contrary to David Gerard's assertion that it's only read by its editors). So the enclave must be a really, really small group, a really minor group even in the context of its area. I suppose that most people in Silicon Valley are not regular LWers (most have probably not even heard of the site). Besides, I thought the biggest LW meetup group was in NYC. Yvain, I'd love to hear your thoughts on the basilisk, provided that Yudkowsky won't excommunicate you of course.--Baloney Detection (talk) 22:35, 8 January 2013 (UTC)
@Baloney - I (and most other LWers) don't find "the basilisk" nearly as interesting as people at RationalWiki seem to. It's basically a really clever re-imagining of Pascal's Wager. Pascal's Wager is kind of weak, but there are stronger versions (see "Pascal's Mugging") that are hard to pick apart logically. Nevertheless, most people have enough common sense not to take Pascal's Wager seriously even if they can't point to the exact logical flaws.
People who tragically lack common sense and compensate by making decisions based on pure reason (e.g. some Less Wrongers) are especially vulnerable to Pascal-type arguments. I know of a couple of people linked to the community who have actually converted to Christianity or Islam based on the Wager, and other people who haven't gone that far but are genuinely bothered by it.
So coming up with a really clever re-imagining of Pascal's Wager targeted at exactly the community containing the people most vulnerable to being mentally screwed up by Pascalesque arguments is a dick move. It's especially a dick move if part of the argument is that only people who have read the argument are going to suffer the eternal torture. It's especially a dick move if you then immediately post it on the vulnerable community so everyone there can see how clever you are. Eliezer understandably got really angry at Roko and deleted the entire thing. Every so often there are vague rumors that someone actually took Roko's Wager seriously and got panicked by it, but it's always "a friend of a friend". Overall I think he's just angry that someone is deliberately spreading information designed to make people panicked and upset, the same way I might be angry if someone started waving posters of goatse around in a church. Yvain (talk) 13:36, 12 January 2013 (UTC)
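A toy expected-value sketch of why Pascal-type arguments are "hard to pick apart" on the arithmetic alone (the numbers here are purely illustrative, not from the discussion above): however small the probability one assigns to the wager's or the mugger's claim, a sufficiently vast stake keeps the product large,

\[ \mathbb{E}[U(\text{comply})] = p \cdot U_{\text{vast}}, \qquad \text{e.g. } 10^{-20} \times 10^{30} = 10^{10} \gg U(\text{refuse}), \]

so the naive expected-utility comparison favours complying unless one rejects a premise somewhere (unbounded utilities, taking the stated probability at face value), which is roughly the "common sense" step mentioned above.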
@Yvain Regarding Eliezer Yudkowsky’s decisions on how to handle Roko’s Wager: banning any discussion of an idea is known to spread it. But more importantly, it can give even more credence to an idea whose hazardous effect is in the first place a result of an unjustified stamp of credence.
If Eliezer Yudkowsky were really interested in protecting gullible people from an irrational idea, then he should go ahead and openly dismiss it as insane and possibly even dissolve the problem once and for all.
It is utterly irresponsible to try to protect people who are scared of ghosts and spirits by banning all discussions of how it is irrational to fear those ideas.
I believe that the real reason for his decision to ban all discussion of Roko's basilisk is rather that he is simply unable to disavow the idea without either having his whole worldview come crashing down as a result, or admitting that the best he can do is act on intuition rather than pure reason, or else going batshit insane and giving in to some sort of Pascal's mugging. XiXiDu (talk) 10:10, 18 January 2013 (UTC)
One could argue that banning all discussion, as if it were a credible threat that he believed in, is taking the batshit insane route. Anarchist 15:44, 21 January 2013 (UTC)
If Andrea Rossi had a forum, and someone came up with a cold fusion nickel-in-water nuke on it, what would Rossi do? Dmytry (talk) 18:27, 3 February 2013 (UTC)

Two interesting pieces

Kruel has recently posted two interesting pieces, The Singularity Institute: How They Brainwash You and Why you should be wary of the Singularity Institute. I suggest the former should make it to the link section on the page. Any disagreement?--Baloney Detection (talk) 22:58, 11 January 2013 (UTC)

I'd prefer to digest the contents into the article, though with a considerably less panicked tone. Most LWers are not in fact dangerously insane, but there does seem to be something about it that (a) attracts the unbalanced and defective (b) gets them to take LW memes and add 2+2 to get 666. (That's an inchoate phrasing, it needs a better one.) - David Gerard (talk) 00:01, 12 January 2013 (UTC)
At the end of the day you have to be insane (or insanely rich) to donate any money to SI. The insane are the target audience for Yudkowsky. Even your go-to example of sanity, Yvain, (a) supports violence if AI seems imminent, and (b) comes here to lie and mislead with "one in five years" etc. rhetoric. Hell, the guy himself came up with tobacco as a code-word for semiconductors. I can't see how much more proof could be needed. It is a bunch of dangerous nutjobs. Not dangerous as in they'll-create-an-evil-AI dangerous, but dangerous as in one day they'll freak out that Skynet is imminent and are going to get violent - or rather, one or two of them are going to get violent, cheered on by all the "nice" and "sane" guys like Yvain. Dmytry (talk) 08:08, 12 January 2013 (UTC)
"Even Yvain supports violence of AI seems imminent". No, I *might* support violence if *an obviously hostile unstoppable SKYNET-style AI seemed clearly imminent*. My guess is so would everyone else (seriously, wouldn't you in that situation?). If some guy is just working on ASIMO 2.0, I'm not going to suggest someone crazy take action. As for "code word", I was *making a reductio ad absurdum*. When one person, a long time ago, brought up the possibility of violence against AI manufacturers, I used violence against tobacco executives as a reductio of why that's not how society works. Could you do me a BIG FAVOR and every time you write "Yvain says..." or "Yvain believes..." in the future, follow it with "...according to my interpretation of him, which has been consistently wrong every time I've tried to use it before"? I am getting really tired of having to clean up after your constant malicious misinterpretations of me. Yvain (talk) 13:48, 12 January 2013 (UTC)
No, of course I'm not suggesting that you'll suggest crazy action when you think someone is working on ASIMO 2.0. The problem is that you would suggest such action when, in your own words, "Intel is making brain simulators that seem likely to become AGI", or something of that kind which to your or Yudkowsky's ignorant ass looks like an unstoppable SKYNET-style AI. Dmytry (talk) 22:29, 12 January 2013 (UTC)

This talk page is becoming one of the central coordination points for LW/SI's critics/stalkers. Maybe *that* should be mentioned on the page too? Mporter (talk) 11:34, 12 January 2013 (UTC)

That is an interesting idea. Mostly, it's Dmytry and (slightly less crazily) Baloney Detection, though, so I hesitate to call it "central" or mention it in the article.--ADtalkModerator 11:49, 13 January 2013 (UTC)
In the same way as Citizendium, and for the same reason: nobody else cares. When the only people talking about you are RationalWiki, your fame is below the background noise level. Not quite as interested, though - I'm pretty sure we collectively couldn't be bothered maintaining a WIGO:LW - David Gerard (talk) 12:05, 13 January 2013 (UTC)
As an ex-LWer who left for RW after being disillusioned about their "cultish" transhumanism beliefs, I would be interested. As long as I'm not the only person maintaining it. - LucidFox (talk) 17:10, 13 January 2013 (UTC)
I don't think this would be productive. Few of the criticisms to be made of LW can be easily distilled into brief sentences, since the object of criticism tends to be so elaborate. Although it is tempting to do a WIGO about stuff like this... something like "LW writing assignment: rewrite the sentence 'bright lights help me stay awake' in the most complicated way possible."--ADtalkModerator 13:17, 21 January 2013 (UTC)
Interesting irony: A mere two years ago (December 2010) Kruel was calling rationalwiki "awful" and proclaimed LW the most intelligent and rational community he knew of. "I'm curious if you know of a more intelligent and rational community than Less Wrong? I don't" [1] And back then it was me who criticized him on his unjust judgment on rationalwiki. [2] Aris Katsaris (talk) 12:23, 21 January 2013 (UTC)
1.) LW is still the most rational and intelligent community I know of. 2.) I still agree with almost everything Yudkowsky has written that I have read, and I extrapolate that I am in agreement with him about mostly everything. 3.) The RationalWiki entry has improved since I wrote that comment. 4.) I now disagree with the part of my comment referring to the Roko basilisk. It should be mentioned explicitly. 5.) Points #1 and #2 do not contradict any of my criticism of SI/LW/Yudkowsky.
Regarding reading the material before trying to defend it. First of all, yes, I haven't read most of the quantum sequence but I have read enough to be able to criticize the RW statement implying that it takes testable predictions to choose one interpretation over another. Secondly, I also gave a probability estimate of MWI being correct when asked during the last LW survey. Despite having almost no knowledge of the topic I can have an opinion based on the data available to me, including a weighing of third-party opinions. I don't see any problem here. XiXiDu (talk) 14:44, 21 January 2013 (UTC)
The QM sequence can be summarized in one sentence: Yudkowsky doesn't understand Solomonoff induction, Kolmogorov complexity, or quantum mechanics, proceeds to see his confusion as proof that physicists are stupid compared to him, and is smug about it. The role of the QM sequence and 'read the sequences' is to ensure that people who have a clue don't join and interfere. (No, cults don't work by evaporative cooling; cults work by attracting only cultists.) Dmytry (talk) 18:19, 22 January 2013 (UTC)
You have about 20 upvoted articles on LessWrong (and a few downvoted ones) and a quite decent karma balance (826 as of the time of writing). If this supposed "cult" of ours desires to be so exclusionary, why were even posts of yours critical of LessWrong (like [3]) getting upvoted? It's interesting to see how a person moves from somewhat reasoned opposition to specific elements of LessWrong to treating it like it's the worst thing in existence. Aris Katsaris (talk) 21:27, 22 January 2013 (UTC)
Back then I didn't happen to know SI claimed 8 lives per dollar, didn't know many of the donors were mentally unstable people who could be kept awake at night by the basilisk, and didn't know they actually try to take away from other charities with their stuff about donating only to the charity with the largest 'expected utility' (with someone using some sort of downvoter bot on opposition to this idea (edit: though I don't know how sanctioned it was, or if it was a one-off occurrence by a zealot)). It didn't really dawn on me that this board is run by genuinely bad people who take money from the mentally unstable (and they don't even have to be evil! Yudkowsky has a super-rich friend, Thiel!). And I only glossed over the QM stuff then; I don't harshly criticize things I haven't read. In any case this page really is no place for our discussion. You can continue e.g. on Kruel's blog. Dmytry (talk) 22:33, 22 January 2013 (UTC)
The page says "Talk:LessWrong" so I think it is indeed the place for discussing LessWrong and its participants, including former participants, as it's interesting to know the reason for their departure (e.g. Lucidfox seems to have left because of the "transhumanist" aspect of LessWrong, but the transhumanist Kruel broke away because of people taking the basilisk too seriously). Your own split from LessWrong seems to have mainly originated in ethical objections in regard to their donation-gathering practices, which by the way I think you're misrepresenting. For starters, how many were those "mentally unstable people" - you call them "many", but do you have a number, and how do you know they have donated to SI? Secondly, I've not seen any other forum direct people to GiveWell as much as LessWrong does EVEN THOUGH GiveWell doesn't rate SI as a priority donation target. Thirdly, some people at LessWrong have argued against the idea of donating only to the highest-expected-utility charity (again in upvoted articles), e.g. On Charities and Linear Utility. The argument against diversifying seems to have originated (or been popularized) in a Slate article, Giving Your All, long before LessWrong came into being. As a sidenote, Yudkowsky himself has suggested (though I can't find that particular comment right now, so I hope I'm not misrepresenting) that if one thinks existential risk from AI to be below 10%, one should probably donate money to something other than SI. Aris Katsaris (talk) 00:23, 23 January 2013 (UTC)
We are supposed to talk about the article here, not about LessWrong itself, I believe, though I and everyone else have been violating this a lot. (edit: actually you can talk about me on my talk page, just reply to this post there.) They did not have a lot of donors at the time of the basilisk, and the mean is unlikely to be very far from an extreme assuming a normal distribution. Yvain's post proves useful: replace Roko's basilisk in Yvain's critique above with the 8 lives per dollar 'estimate'. Also: I have a pet butterfly whose flap of the wings has a 1% chance of causing a hurricane that kills 800 people. That's the reductio ad absurdum of this estimating technique. What you linked doesn't seem to go against the idea of donating everything to the top charity; at a brief glance (but going all the way to the bottom) it's just some fellow Russian fixing their sloppy math for them. And I never claimed that they invented it; they popularize it among their donors. My understanding is that this is useful for them because one either arrives at an expected utility of 0 for donations (if one correctly accounts for poor predictability and the possibility of increasing risk by donating to people who seem convinced that AI is very dangerous, as this opinion is more informative of the approaches which they consider viable than of AIs in general), or some ridiculous figure. So they want the small fraction that donates to them at all to do all their donations to them. Dmytry (talk) 02:03, 23 January 2013 (UTC)
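Spelling out the arithmetic of the butterfly reductio above (the figures are the ones given in the comment; rendering them as an expected value is my illustration):

\[ \mathbb{E}[\text{deaths per wing flap}] = 0.01 \times 800 = 8, \]

the same tiny-probability-times-large-stakes product that makes an "8 lives per dollar" figure sound impressive while the underlying probability remains unverifiable.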
There's no rule that you have to talk about the article (and not the subject) on a talk page. Peter Droid whisperer 02:12, 23 January 2013 (UTC)
Ahh, good. I elaborated some more here. Note the link with these folks talking about the 8 lives per dollar "estimate" (Vladimir Nesov and Michael Vassar are SI). The donors, while very much lacking in common sense, are not entirely stupid, and some would be weirded out if SI started deleting all critique. And a board for recruitment of potential donors - the outermost recruitment circle - does by necessity have many outsiders who never donated to SI and never would donate to SI. Dmytry (talk) 19:37, 23 January 2013 (UTC)

Interesting new angle on Aris Katsaris's point: "If this supposed "cult" of ours desires to be so exclusionary, why were even posts of yours critical of LessWrong (like [4]) getting upvoted?" Maybe because they didn't delete it in time? (It's gone now.) It was at 36 up, 26 down - the software exposes the vote count if you view the page source. I think it's just a product of what they describe as the watering-down problem: http://lesswrong.com/lw/ec2/preventing_discussion_from_being_watered_down_by/ . AFAIK Holden Karnofsky's critique had a significant number of downvotes as well - being high-profile, they didn't want to burn the bridges, so it wasn't deleted, and a huge number of non-regulars upvoted it. Dmytry (talk) 13:49, 1 February 2013 (UTC)

LW going nuts with burning the evidence

cf. examples. Time for a rewrite emphasising this angle - David Gerard (talk) 12:46, 1 February 2013 (UTC)

Some examples,
http://lesswrong.com/lw/bph/how_a_software_developer_became_less_concerned/#comments
http://lesswrong.com/lw/ds4/article_about_lw_faith_hope_and_singularity/73co?context=8#comments
http://lesswrong.com/r/discussion/lw/ct8/this_post_is_for_sacrificing_my_credibility/6qrg?context=6#comments
http://lesswrong.com/lw/dyk/selfskepticism_the_first_principle_of_rationality/7847?context=2#comments
http://lesswrong.com/lw/etf/firewalling_the_optimal_from_the_rational/7ljf?context=1#comments
http://lesswrong.com/lw/6ld/dealing_with_the_horrible_strategy/
http://lesswrong.com/r/discussion/lw/93o/would_a_fai_reward_us_for_helping_create_it/5jxx
And I only stumbled upon those by chance. I read only a very few comments, by people to whose comment threads I had subscribed. Which means that it is rather unlikely that those are all the occasions of censorship. XiXiDu (talk) 15:16, 1 February 2013 (UTC)
Please go retrieve copies from Google Cache if they're still there, then capturebot them - David Gerard (talk) 14:56, 1 February 2013 (UTC)
I'm using wget and grep to find examples in a less biased manner. There are a lot of deleted comments. A common pattern is that someone from MIRI makes a particularly arrogant/self-praising claim (e.g. Luke claims extreme self-scepticism skills), then some responses get deleted. Dmytry (talk) 12:37, 2 February 2013 (UTC)
paper-machine asked on LW if I know people can delete their own comments - sure they can, but that's not going to produce entire trees of 'comment deleted'. I stopped wgetting their pages because their site has been going offline lately. Dmytry (talk) 04:13, 3 February 2013 (UTC)
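A minimal sketch of the kind of scan described above, assuming the thread pages have already been mirrored locally (e.g. with wget) and that removed comments leave a literal "comment deleted" placeholder in the HTML; the directory layout and the placeholder string are assumptions for illustration, not Dmytry's actual script:

import glob
import re

# Placeholder text assumed to mark removed comments on mirrored LessWrong pages.
DELETED = re.compile(r'comment deleted', re.IGNORECASE)

def count_deleted(path):
    # Count deletion placeholders in one locally saved HTML page.
    with open(path, encoding='utf-8', errors='ignore') as f:
        return len(DELETED.findall(f.read()))

if __name__ == '__main__':
    # Pages assumed to be mirrored into ./lesswrong/ beforehand, e.g. via `wget -r`.
    for page in sorted(glob.glob('lesswrong/**/*.html', recursive=True)):
        hits = count_deleted(page)
        if hits:
            print(f'{hits:4d}  {page}')

Pages with many hits would be candidates for the "entire trees of 'comment deleted'" pattern mentioned above.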

Official LW uncensored thread at reddit: http://www.reddit.com/r/LessWrong/comments/17y819/lw_uncensored_thread/ Aris Katsaris (talk) 11:43, 6 February 2013 (UTC)

Apart from being IMO a reasonable solution to the problem (EY wants to garden his forum seriously, drama queens like me yell about it), the uncensored thread is interesting in that EY is finally discussing the basilisk properly with people. Once this discussion peters out, it'll be worth examining our section to improve it - David Gerard (talk) 20:56, 6 February 2013 (UTC)
It's all around totally nuts. E.g. he says "I do not consider any of you lot to have any technical knowledge of this subject whatsoever; I'm still struggling to grasp these issues", then a couple of paragraphs later he says "But it's still hard for me to comprehend what could possibly, possibly be going through your mind at the point where you ignore the notion that the tiny handful of people who can even try to write out formulas about this sort of thing, might be less confident than you in your arguments for reasons other than sheer stupidity.", then proceeds to assert that "There is no possible upside of talking about the Babyfucker whether it is true or false". Apparently, arguing against crazy bullshit that messes with people's heads (after knowledge of this bullshit has been spread by EY's mistake) doesn't have an upside to it. Yudkowsky also smugly told me, Mitchell Porter and a few others, on his authority, that our counter-arguments are flawed (which he previously merely implied by silently deleting entire trees of counter-arguments). I'm so thankful he didn't say why, or I'd be eaten by the basilisk (that was sarcasm). David Gerard may recall that a few weeks back I came across the website of one guy who got severely mindfucked by this a couple of years back (long before I heard of it), muflax. I noticed he had my product on his Steam wishlist, gave him a free copy, and told him on my authority that I consider the basilisk to be crazy bullshit and not to worry about it. Sorry if this all sounds disjointed; there's just not a lot of logic to the basilisk. Dmytry (talk) 23:52, 6 February 2013 (UTC)
This thread is finally allowing me to think through the basis of my own confidence that basilisk phobia is nonsense. Maybe I should write a philosophy paper about it. If we put the drama and paranoia of the past three years aside, there is a legitimate question here, that would be right at home in a philosophy journal, even if it looks unsightly as the obsession of a real-world AI think-tank: does the idea of acausal blackmail make sense at all, and specifically, could a human being have a justified true belief that they were being acausally blackmailed? My answers would be maybe yes, and almost certainly no, on the grounds that we can devise artificial scenarios where agents are justified in believing that there are other agents with which they can acausally "interact", but that human beings do not inhabit such a scenario. Mporter (talk) 12:03, 7 February 2013 (UTC)
Wow. "Basilisk phobia" and "acausal blackmail"? I've obviously been missing out on something.--Bob"I think you'll find it's more complicated than that." 12:19, 7 February 2013 (UTC)
Could we please start taking into account everything else that needs to be true for it to make sense to take the basilisk seriously? I know that even many of those people who are critical of MIRI are advocates of a technological singularity, which doesn't make it easier for me to talk about this issue. I always have to talk about it under the given assumption that ridiculous amounts of technological magic are not only logically and physically possible and economically feasible, but that everything will turn out in such a way that we'll create all that magic quickly enough that we won't be able to adapt. But please consider for a moment what needs to be true and what needs to happen in order for the universe to ever see any agent that is capable of and willing to do everything necessary for the basilisk to work out in practice.
An agent with the resources and capabilities to somehow simulate a huge number of possible beings in such a way as to draw action-relevant conclusions about their decisions? Come on... please. This is just fantasy.
I initially posted a comment on that Reddit thread but decided to delete it. I just can't stand this bullshit anymore. I think I am going to turn my back on this whole issue. XiXiDu (talk) 12:20, 7 February 2013 (UTC)
I am beginning to get sick of it as well; I think some people simply pick counter-intuitive bullshit and pretend to see something in it to feel smarter. Anyway, a fairly general counter-argument, reliant only on the AI not being easy to hurt: the basilisk hypothesis is that you could hurt a superhuman AI by calculating some things and then not donating. The AI doesn't want to waste resources torturing you; you getting tortured is the AI losing, too. Supposedly you can hurt superhuman artificial intelligences by thinking about them - across world boundaries, no less, over spacelike distances, and so on. Ridiculous as ridiculous gets. Probably entirely reliant on an inability to imagine a superhuman AI that, ceteris paribus, would not want to torture anyone. Roko's original thing first discussed the extrapolated volition of mankind being vengeful - people are dicks and aren't rational. Dmytry (talk) 16:14, 7 February 2013 (UTC)
/r/skeptic meets the basilisk - David Gerard (talk) 10:52, 8 February 2013 (UTC)
Wrote a quick post for those who believe that Roko's basilisk might turn out to be true: How to defeat Roko’s basilisk and stop worrying - XiXiDu (talk) 12:00, 8 February 2013 (UTC)
I have now read just enough about "basilisk phobia" and "acausal blackmail" to dismiss it all as pretentious rambling. But am I making the mistake of missing the sophisticated theology behind all this?--Bob"I think you'll find it's more complicated than that." 19:45, 8 February 2013 (UTC)
You know, you should really read the Sequences - David Gerard (talk) 19:52, 8 February 2013 (UTC)
@Bob It is rather complicated. There is this whole sophisticated framework of beliefs (see the "Sequences" linked to above) underlying how those people think.
You can get a really quick overview by reading the following 3-part interview with Eliezer Yudkowsky by mathematician John Baez: Part 1, Part 2, Part 3. Additionally you might want to check out my primer on AI risks. That should give you a pretty good overview.
The following quote by muflax might also shed some light on the whole issue:

I hate this whole rationality thing. If you actually take the basic assumptions of rationality seriously (as in Bayesian inference, complexity theory, algorithmic views of minds), you end up with an utterly insane universe full of mind-controlling superintelligences and impossible moral luck, and not a nice “let’s build an AI so we can fuck catgirls all day” universe. The worst that can happen is not the extinction of humanity or something that mundane – instead, you might piss off a whole pantheon of jealous gods and have to deal with them forever, or you might notice that this has already happened and you are already being computationally pwned, or that any bad state you can imagine exists. Modal fucking realism.

Most of the craziness is due to taking the math too seriously. Mistaking mere squiggles on paper for reality. Ignoring common sense. Taking ideas seriously. Outweighing tiny probabilities by conjecturing vast utilities. XiXiDu (talk) 20:07, 8 February 2013 (UTC)
Thank you all for taking the time to try to educate me. Sadly life is short, and I'm sure that my present misunderstanding of the concepts is more than sufficient for any real-world tasks which may come my way. (I suspect that it's sufficient for any unreal tasks as well, but I'll certainly read up on this if any unreal tasks appear.)--Bob"I think you'll find it's more complicated than that." 21:25, 8 February 2013 (UTC)
None of these posers are even qualified to be "taking the math seriously". They're taking seriously some idiotic garbage which they think correct "rational" math is - entirely unjustified (there isn't any optimality argument of any kind for TDT; they just verbosified "superrationality"). Keep in mind that none of these folks have published any papers in peer-reviewed mathematical journals, written any interesting software, or earned any good degrees in math, or anything of that kind whatsoever. Bob is absolutely, one hundred percent correct in dismissing it as the pretentious garbage it is. Being misexplained as "taking the math too seriously" is what they are after. Yudkowsky does not have a degree, novel results, or anything of that kind - how could anyone think he's worth giving money to? He expresses dire concern over shit like this to make people think, ooh, that guy REALLY takes math seriously. Dmytry (talk) 09:17, 9 February 2013 (UTC)
What I meant by "taking the math too seriously" is that they mistake certain ideas for binding laws when they are actually mere squiggles on paper that can be ignored. You are not forced to one-box when faced with Newcomb's problem. You can tell Omega to fuck off and refuse to play the game. It is not necessarily irrational to play the lottery, etc.
Another aspect of "taking the math too seriously" is that much of their math is intractable, practically useless. Yet they act as if they are actually doing what the math commands them to do when they are really just rationalizing their beliefs.
Here is a quote by Greg Egan (the science fiction author), something he wrote in an email conversation, which better highlights what I meant:

I realise that they have a lot of impressive-sounding jargon and slogans, and not *everything* they say is false and foolish, but in my view they've just sprinkled enough mathematics and logic over their fantasies to give them a veneer of respectability.

See the following posts I wrote: 1.) Asteroids, AI and what’s wrong with rationality 2.) Rationality? Come on, this is serious! - XiXiDu (talk) 09:39, 9 February 2013 (UTC)

Something new every day

I don't think I've ever seen a self-reverting wandal before. Hydrogen and Time (talk) 13:13, 7 February 2013 (UTC)

Maybe it was an acausal vandal. Where? I don't see it in the article's history... edit: ahh, talk history. lol. Dmytry (talk) 08:22, 8 February 2013 (UTC)