Talk:LessWrong/Archive2


This is an archive page, last updated 3 May 2016. Please do not make edits to this page.

I'm almost afraid to mention these, but...

You have to see it for yourself. And this, too. And also the post that the second comment is in response to. I should hasten to add that I am putting these here as (mild) criticisms of the LW community, not of Yudkowsky - having watched the speech that the second picture is from, I can attest that he meant it as a joke. Tetronian you're clueless 21:54, 11 October 2010 (UTC)

I'm pretty sure it's taking the piss (and remarkably well!), but you can only really take the piss that way when you have a bit of a personality cult going, a lighthearted one, but one nonetheless. Scarlet A 22:02, 11 October 2010 (UTC)
Eliezer Yudkowsky can make Chuck Norris shave his beard off by using text-only communication! Scarlet A 22:17, 11 October 2010 (UTC)
Haha! I think that's cleverer than half the ones on there. Tetronian you're clueless 22:50, 11 October 2010 (UTC)
Looking at the post that my first link is to, I can definitely see why the personality cult exists: EY gave his Overcoming Bias/LessWrong readers an interesting blog post every day for like two years. That's got to have serious effects on you if you were with it from the beginning. Tetronian you're clueless 23:05, 11 October 2010 (UTC)
Did you know he saved the world, too? Here's the half of the link (I'm an anon, so I'm not putting the whole): /lw/2lr/the_importance_of_selfdoubt/2ia7?c=1
Link. It's amusing in that the guy Eneasz is lecturing there, multifoliaterose, is IRL John Baez, who knows EY well and happens to be the author of this - David Gerard (talk) 23:58, 1 December 2010 (UTC)
Whoops, I'm wrong - they're different people - David Gerard (talk) 00:02, 2 December 2010 (UTC)

some Pottery guy fanfic

seriously, the ref is to a bunch of crap. Can it at least be quoted in its interesting regards? ħuman (talk) 09:42, 26 October 2010 (UTC)

Well, roll it back like you do to every other edit you disagree with. Scarlet A 09:46, 26 October 2010 (UTC)
I deredded the link to his name and pasted it in there, where it makes more sense. So there :P ħuman (talk) 09:50, 26 October 2010 (UTC)
The fanfic is linked in one of the footnotes. It might be more relevant in an article on EY, but I submit that a footnote is quite sufficient in an article on LessWrong itself - David Gerard (talk) 21:55, 26 October 2010 (UTC)
82.x.x.x just added it again as an EL, I copied it to the EY article, then saw the redundancy. I don't see how EY's fanfic really belongs in the LW article. It's about him, not his fan club. ħuman (talk) 09:59, 27 October 2010 (UTC)
His FF.net username is "LessWrong", he explicitly wrote it as propaganda and LessWrong readers are pretty much expected to have read it, so I think it's relevant - David Gerard (talk) 11:36, 27 October 2010 (UTC)
We have an article on his name. See also?? ħuman (talk) 11:39, 27 October 2010 (UTC)
OK, I see your point. But it must be made better. ħuman (talk) 11:40, 27 October 2010 (UTC)

o_0

This is a truly remarkable sentence. The explanations don't actually help make it seem much more connected to the real world. Or is it just me? - David Gerard (talk) 23:50, 13 November 2010 (UTC)

Alicorn explained it pretty well, I thought. As did Perplexed. Tetronian you're clueless 01:55, 14 November 2010 (UTC)
I should add that (to the best of my understanding) the obsession with acausal relationships on LW comes from Eliezer's "Timeless Decision Theory" and Wei Dai's (another top contributor) "Updateless Decision Theory." Both of which are acausal. I've read some of EY's paper on TDT and a few of the UDT posts, I can try to explain if you like. Tetronian you're clueless 01:57, 14 November 2010 (UTC)
Oh, and here is what the sentence means: If you are the kind of person who saves murderers and murderers know this, then this means that at any time you will/have decreased the disincentive for murderers to murder. I'm not sure how it would pay rent, but the LW crowd buys it because it is a consequence of TDT/UDT. Tetronian you're clueless 02:03, 14 November 2010 (UTC)
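To make the deterrence reading above concrete, here is a minimal sketch (purely illustrative, not TDT/UDT itself; the murderer_decides rule and the policy names are invented for the example) of how a policy can influence someone who acts on a prediction of that policy, with no backwards causation involved:

    # Toy illustration of the deterrence point above (not LW's actual decision theory).
    # The would-be murderer acts on a *prediction* of your policy, so your policy
    # affects their choice even though nothing travels backwards in time.

    def murderer_decides(predicted_policy):
        """Murder only happens if the murderer predicts rescuers will save them."""
        return predicted_policy == "always_save"

    def outcome(your_policy):
        predicted = your_policy  # assume the prediction is accurate
        if murderer_decides(predicted):
            return "murder happens (and you save the murderer)"
        return "murder deterred"

    for policy in ("always_save", "never_save"):
        print(policy, "->", outcome(policy))
    # always_save -> murder happens (and you save the murderer)
    # never_save -> murder deterred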
They are either far too smart or far too stupid for me to fully comprehend. I have not decided which yet. ħuman (talk) 02:44, 14 November 2010 (UTC)
Meh, I don't think it's a question of smart vs. stupid. They've developed a whole new decision theory that's pretty alien to the rest of us, so most of the things they say seem weird. Tetronian you're clueless 03:13, 14 November 2010 (UTC)
Weird, yes, but are they intelligent ideas, or just mental masturbation? ħuman (talk) 03:39, 14 November 2010 (UTC)
I'm not entirely sure what you mean by "mental masturbation." I would say that a lot of the discussion about rationality and cognitive biases is intelligent; some but emphatically not all of the discussion about AI is as well. The stuff about "luminosity" and self-help is definitely highly questionable. Tetronian you're clueless 04:28, 14 November 2010 (UTC)
And the bits where they casually throw physics and chemistry out the window and talk about nanobots as if they can exist is useful standalone evidence of susceptibility to blithering idiocy. Sentences like the one I quote are pretty good though. The explanation tells me what they're getting at, but doesn't actually make it less stupid - David Gerard (talk) 19:45, 14 November 2010 (UTC)

The few times they make any sort of sense, they're talking in a really confusing way about things that anyone knows via common sense. Every idiot knows that, e.g., the death sentence for murder deters some of the crime; complete idiots don't understand that it doesn't work via some backward 'causality', and people who think in terms of a fairly stupid decision theory apparently don't either. Dmytry (talk) 15:45, 22 May 2012 (UTC)

Sorry but...

...I have now read a handful of various threads over at Less Wingnutty, and I am beginning to wonder. I have seen lots of examples of "smart" writing, but have yet to encounter one of "intelligent" writing. Yes, I know they use their own jargon and live in their own paradigm, but that's no excuse for being a bunch of scholastic idiots who think they matter somehow. Did I miss something somewhere? ħuman (talk) 02:09, 15 November 2010 (UTC)

Pretty much nailed it. The only good LW posts are those about rationality, but nowadays it's mainly focused on AI and all that other crazy stuff, so it makes for a very bizarre read. Plus, the decision theory stuff used to make sense - now it's so far out that I can no longer follow it. Tetronian you're clueless 02:12, 15 November 2010 (UTC)
Thanks, I worried that I was either going crazy, or getting far stupider than the alcohol would account for. ħuman (talk) 02:25, 15 November 2010 (UTC)
The way that discussions work on LW can be explained in a few ways:
  1. Excessive jargon that takes a lot of practice to unpack
  2. Discussions based upon a rather far-reaching premise that most people wouldn't accept
  3. Esoteric discussions that are hard to understand without knowing a lot about math, decision theory, and most of all the exalted sequences
  4. Poor writing (in terms of clarity)
So, basically, unless you've been reading for a while or you spend a lot of time there, it can be really hard to figure out what's going on. I remember when I first started reading LW, I felt like I was way out of my depth. Tetronian you're clueless 02:28, 15 November 2010 (UTC)
And now you feel like what? You are attending a kindergarten for HF autistic kids? ħuman (talk) 02:44, 15 November 2010 (UTC)
As far as the AI stuff goes, I feel like I'm watching a community that says, "Ok, let's explore the possibilities if we accept that fairies make the sun go down every night." (And everyone nods in agreement.) The jargon doesn't really affect me any more, so I see the content rather than the style. There is some interesting stuff, but most of it is in a really, really bizarre framework. Tetronian you're clueless 02:50, 15 November 2010 (UTC)
I still find most of it impenetrable - although, that said, I haven't given it a proper good go yet - so I mostly stick to Yudkowsky's posts on rationalism. Say what you like about the esoteric AI stuff, but that man knows his shit when it comes to cognitive biases and thinking. Scarlet A 09:32, 15 November 2010 (UTC)
I'm fine with the AI stuff - it's an eminently plausible problem, but not one I'm smart enough nor interested in enough to help with directly personally. So knowing someone else is thinking deeply on it is fine by me - David Gerard (talk) 15:12, 24 November 2010 (UTC)
Yeah, sure, but what if they are Completely Wrong? Or just fucking stupid? ħuman (talk) 05:32, 25 November 2010 (UTC)
If they're wrong, the consequences are minimal. I believe we have a wiki here about people who pursue ideas past the point of actual wrongness - David Gerard (talk) 10:22, 25 November 2010 (UTC)
But if you're not thinking about it and contributing, won't a future AI punish you for your blasé and feckless attitude? Scarlet A 15:23, 24 November 2010 (UTC)
Only if it reads LessWrong, apparently - David Gerard (talk) 16:01, 24 November 2010 (UTC)
So we need to tell the superhuman AI its own idea? Scarlet A 16:10, 24 November 2010 (UTC)
So it seems. And hope it'll never read I Have No Mouth And I Must Scream - David Gerard (talk) 16:56, 24 November 2010 (UTC)

Local tropes

I have added a list of the stuff that's struck me as local distinct weirdness. Could probably do with being phrased even more nicely. Also needs a sober and understated single sentence precis of what TDT is. - David Gerard (talk) 21:30, 27 November 2010 (UTC)

Timeless Decision Theory is in fact independent of many-worlds, edited out the bit that said otherwise. Bo (talk) 15:50, 28 November 2010 (UTC)
So what are the ingredients, in your own words? - David Gerard (talk) 17:09, 28 November 2010 (UTC)
"Acting as if you're controlling the output of the platonic computation that you implement", is one bit of poetry that comes to mind. If you really want to get into TDT, here's a nice leisurely Yudkowskian introduction to the issues that doesn't really have prerequisites: http://singinst.org/upload/TDT-v01o.pdf Bo (talk) 19:16, 28 November 2010 (UTC)
I also had a look at the emergence thing. It seemed to initially imply that he denied the existence of emergent phenomena but I'm not sure if that's the actual case; he's just against using it as an explanation rather than a description according to that link. That said, he does buy into cryonics and has admitted in a few videos that he's a reductionist so I honestly wouldn't be surprised if he thought the entire concept of emergence was wrong. Scarlet A 17:06, 28 November 2010 (UTC)
It's treated as disallowed vocabulary. I intended the section to list local weirdnesses with references, not necessarily to argue them - David Gerard (talk) 17:09, 28 November 2010 (UTC)
He (EY) thinks "emergence" is being used as a magic word because it doesn't refer to a precise process or phenomenon. He's not against the idea of emergent phenomena, he just thinks that people tend to use the word to explain things when they really don't know what they're talking about. Tetronian you're clueless 18:11, 28 November 2010 (UTC)
Also: TDT Tetronian you're clueless 18:12, 28 November 2010 (UTC)
That's it - it was intended to solve impossible word problems - David Gerard (talk) 19:15, 28 November 2010 (UTC)
Haha, exactly. Though I suppose it is possible to have something like Newcomb's problem in real life (although you would need either a really simple decision agent or a really really smart person/AI to be Omega), but I don't see why we need to revamp decision theory just because of it. CDT (causal decision theory, the standard one) works except in cases like Newcomb, where you've got an agent that can predict your actions. In those cases we can make exceptions and just use EDT (evidential decision theory). Eliezer even admits this in his ridiculously long TDT paper, but then he goes and re-invents decision theory anyway. Tetronian you're clueless 19:42, 28 November 2010 (UTC)
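For anyone who wants the CDT/EDT split above spelled out, here is a minimal expected-value sketch of Newcomb's problem (toy numbers; the 0.99 predictor accuracy is an assumption, and this is not Yudkowsky's TDT):

    # Newcomb's problem, toy expected values (illustrative only).
    # Omega puts $1,000,000 in the opaque box iff it predicts you will one-box;
    # the transparent box always holds $1,000. Assumed predictor accuracy: 0.99.

    ACCURACY = 0.99
    SMALL, BIG = 1_000, 1_000_000

    def edt_value(action):
        """Evidential decision theory: condition the box contents on your own action."""
        p_big = ACCURACY if action == "one-box" else 1 - ACCURACY
        return p_big * BIG + (SMALL if action == "two-box" else 0)

    def cdt_value(action, p_big_already):
        """Causal decision theory: the contents are already fixed, whatever they are."""
        return p_big_already * BIG + (SMALL if action == "two-box" else 0)

    print({a: edt_value(a) for a in ("one-box", "two-box")})
    # EDT prefers one-boxing: {'one-box': 990000.0, 'two-box': 11000.0}
    print({a: cdt_value(a, 0.5) for a in ("one-box", "two-box")})
    # CDT prefers two-boxing whatever the fixed contents: two-boxing always adds $1,000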

I'm not gonna edit right away because one of my edits was reverted earlier. So I'll make a couple of points here.

- Yudkowsky's declaration that MW is correct is actually an index into the posts making up his lengthy argument that MW is correct.

- Newcomblike problems do not violate causality, regardless of whether the existence of Omega does. To realize this, just realize that people can predict other people's actions in everyday life and see Parfit's Hitchhiker and this post. Bo (talk) 19:27, 28 November 2010 (UTC)

Also, TDT isn't the only homegrown decision theory circulating on LW. See the list on this wiki page. Bo (talk) 19:34, 28 November 2010 (UTC)
UDT messes with my head even more than TDT. All of the posts that try to "explain" it are really poorly written and only make it worse. Tetronian you're clueless 19:43, 28 November 2010 (UTC)
TDT is the daddy one - David Gerard (talk) 19:47, 28 November 2010 (UTC)
AFAIK, Wei Dai invented UDT independently. Bo (talk) 20:15, 28 November 2010 (UTC)
Uh, are you saying that making a lengthy argument and phrasing his conclusion as a declaration is not declaring? It looks like one to me. What's your working definition of "declares" that this does not fit?
I'll take your word for the Newcomb's one. The impossible hypothetical philosophical abstraction invented to explore edge cases and create the philosophical equivalent of mathematical singularities being taken seriously as if it could actually exist (that link also conflates counterfactual-mugging-susceptible decision theories and many worlds, which may have confused me as to TDT and MWI) is sufficiently surprising to a newcomer to be worth noting - David Gerard (talk) 19:47, 28 November 2010 (UTC)
The article makes it sound like everyone is going along with it just because Yudkowsky declared it. For all the ways that LW does resemble a personality cult, this isn't one.
Also, it's funny that you say susceptible. I'd say compatible :). Bo (talk) 20:12, 28 November 2010 (UTC)
Yes indeed, I do think the decision theory weirdness would not be taken seriously without the going-along effect you describe. The "emergent" one is quite definitely personality-culting. I wonder how many oracular-looking posts by EY that have become commonplaces were reactions to an AI researcher that had annoyed him that day. Perhaps he's just been unusually lucky in what and who's annoyed him. In any case, these are odd things that make even interested newcomers go "what the shit", as illustrated by this talk page. Just imagine how someone less primed might react - David Gerard (talk) 21:40, 28 November 2010 (UTC)
I think the emergence example is overused. Though it is a good demonstration of a mysterious answer, which is an important and valid insight from EY, "emergence" isn't even used the way EY says it is in philosophy and some areas of AI. Unfortunately, the LW crowd uses it as a strawman, probably because it's more subtle than saying "magic," and beats it to death. Btw, I believe the idea first came from here (scroll waaaayyy down). Tetronian you're clueless 02:15, 2 December 2010 (UTC)

No, Omega is a fairy story for philosophers. Crazy assholes, however, exist.

What sort of bollocks is claiming this is a person with the predictive power of Omega, rather than plain old hooking up with a slow-boiling crazy person who has a script for your life that they're determined to fit you into? What the unlikely fuck? How the hell is the answer to this problem not "get the hell out of the twisted manipulative relationship" but "invent timeless decision theory"? - David Gerard (talk) 01:54, 29 November 2010 (UTC)

The problem with the scenario is that real people can't just change their values/preferences based on a utility calculation. You can't just make a proposal sincere because you calculate that it will increase the probability of being happier. People don't work that way. I think. Tetronian you're clueless 02:09, 2 December 2010 (UTC)

I'm sure these won't be needed

and none of these comments will disappear - David Gerard (talk) 09:36, 8 December 2010 (UTC)

Full HCM. I have dived in to be as helpful as I can - David Gerard (talk) 14:06, 9 December 2010 (UTC)

At first it doesn't look like it, but then if you scroll down to the bottom... Tetronian you're clueless 16:13, 9 December 2010 (UTC)
Liveblogging some of the HCM: zing! Tetronian you're clueless 16:33, 9 December 2010 (UTC)

Timeless physics

LessWrong is the top three Google hits for the term "timeless physics"; EY slings the terminology around as if it's a standard accepted thing in physics, but not even Wikipedia has an article as yet. Perhaps RW needs one, if someone knows enough to write one. For now, it's another LW idiosyncrasy - David Gerard (talk) 15:08, 19 December 2010 (UTC)

What is it? Blancmange (talk) 20:38, 19 December 2010 (UTC)
[1] - at least it's covered in disclaimers - David Gerard (talk) 11:37, 20 December 2010 (UTC)

Hell

I think there is a massive problem with this section in that people are just interested in sticking in their one-liners about thought-police or "hell" and so on, so that the paragraph no longer contains any information whatsoever. I know the thing they're talking about is a little esoteric but that doesn't mean it can't be explained in blunt terms. I've removed it pending it being rewritten from scratch in terms that are actually readable. Scarlet A 14:16, 2 January 2011 (UTC)

Updated to simply note that on-topic posts have been censored. This is something that has bothered many LW posters, myself included, and does deserve mention either in "the bad" or in "the ugly". Please state an actual reason if you re-delete (David Gerard, I'm looking at you here :p). -Waitingforgodel 19:17, 3 January 2011 (PST)

The reason is that the entire thing was so cryptic (and it still is) that it doesn't inform anyone. The post is tl;dr and anyone who doesn't know LW inside and out won't be able to understand the first thing about it. It needs to be explained thoroughly and properly: what the idea is, who said it, and why it is banned. The last incarnation didn't mention any of that but merely said something along the lines of "rationalists have their own version of hell". Or are we True Believers in this so aren't allowed to talk about it here either? Scarlet A 08:19, 4 January 2011 (UTC)
Agree. This is a very fair critique. My understanding, based on Mr. Gerard's comments below, is that it would avoid conflict if someone else wrote this section. Any volunteers? I can provide links to the deleted post, EY's reaction, and several later posts by EY about why he did it. -Waitingforgodel 9:08, 4 January 2011 (PST)
So where are those "links"? Why not simply "provide" them? Thank you for your cooperation. ħuman (talk) 06:31, 5 January 2011 (UTC)
Does this mean you're volunteering to write the description? Getting a more complete list than simply the censored post, this thread, and EY's responses here would take more time than I have right now. Anyone else have better (or more specific) references? I'm sure there are many posts missing from this list... - Waitingforgodel 10:42, 5 January 2011
No, it means I am asking you to come through with the links you offered. Thank you. ħuman (talk) 06:14, 6 January 2011 (UTC)
I think what waitingforgodel means is that he decided to act like a troll about it and is now upset that he got downvoted to the centre of the earth. Anyone but waitingforgodel needs to write this up - David Gerard (talk) 15:34, 4 January 2011 (UTC)
If you'd like to question my motives, that's fine, but this doesn't address why you feel that "Many on-topic posts have been censored from LessWrong, though so far they have all been related to a single post." is somehow upset or biased -- it looks to me like a minimum acknowledgement that the censoring occurred, which otherwise the page would lack. Please respond with a critique of the content, not of my character :/ -Waitingforgodel 9:08, 4 January 2011 (PST)
Linking directly to the post without a warning (there will be susceptible LessWrong readers reading this, and even if you think they're stupid it's rude) is just a little trolly of you. Try drafting it on the talk page if you're so intent - David Gerard (talk) 22:26, 4 January 2011 (UTC)
Okay, how about this: stop the bickering and, either or both of you, attempt to write a paragraph about it that goes to the following brief: 1) states clearly what the post suggests, 2) states clearly why the post was deleted, 3) can be understood by someone who doesn't spend the majority of their waking hours reading LW, and 4) doesn't mention the words "hell" or "Cthulhu". I know there's something about being aware of the idea that a machine will one day torture you in revenge, or something, but let's make the assumption that we're all Singularity Atheists here. As DG suggests, put a suggestion on the talk page. Scarlet A 23:07, 4 January 2011 (UTC)


Okay, here's my third draft at a simple+NPOV description of what happened:

Eliezer Yudkowsky, and LessWrong at large, are interested in causality that goes backwards in time; i.e. future events causing past events[1][2]. Most examples involve simulating what someone will do in the future, and then using that information in the present. A concrete example is not giving a gun to someone you believe will shoot you. These ideas get stranger when you imagine a super-human intelligence simulating a human-level intelligence, because its predictions become near perfect[3].

These thoughts led to LessWrong's first censorship when Roko (a top contributor at the time) wondered if future AIs would punish people who didn't donate all they could to AI research. Roko reasoned that every day the world waits for AI very bad things happen (150,000+ people die every day, war is fought, millions go hungry, etc). Because future AIs will want to prevent these bad things from happening, they might punish those who understood the importance of donating but didn't donate all they could. Roko then wondered if future AIs would be more likely to punish those who had wondered if future AIs would punish them.

That final thought proved too much for one LessWrong reader, who began to have nightmares about being tortured for not donating enough to SIAI. Eliezer Yudkowsky replied to Roko's post calling him names and claiming that posting such things on the internet could have caused incalculable harm to the future of humanity. Four hours later, Eliezer Yudkowsky deleted Roko's post including all comments (though a Google cache of it survived - please do not read if you are prone to nightmares and/or are worried about causing incalculable harm to the future of humanity). Roko decided to leave LessWrong, and deleted his thousands of posts and comments[4].

Whaddaya think? - Waitingforgodel 21:17, 4 January 2011 (PST)

This summary has an *extremely* important distortion. It leaves out the reason why people think the article was a basilisk entirely! It was *not* suggested that a future AI would punish people who "understood the importance of donating but didn't donate all they could"; there were two more conditions that would have to hold. First, the person would have to model the AI well enough that whether the AI actually punished them was correlated with their prediction of whether it would or not. Second, they would have to adjust their behavior in a direction the AI liked if and only if they predicted that the AI would punish them if they didn't. These conditions would not be met for anyone who was unfamiliar with the idea, so people who don't read the article are immune. Jimrandomh (talk) 01:27, 6 January 2011 (UTC)
Here's a quote from the banned post EXPLICITLY saying people may be punished for not donating all they can: "In this vein, there is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation" (line 25). - Waitingforgodel 18:57, 5 January 2011 (PST)
Hmm. I'm having a hard time understanding this... If understanding the AI is a requirement, and no one knows how to build AI, why would anyone think this is a dangerous idea? If the requirement is merely understanding what sort of strategy the AI is likely to employ (in this case Roko is talking about a perfect democracy, CEV, of all humans), why is torture for not donating enough somehow not seen as a risk? We have good samaritan laws because a large percentage of the population feels that seeing a problem obligates you to fix it... Thanks, -Waitingforgodel 6:10 5 January 2011 (PST)
It seems that you do not understand the relevant decision theory at all. You really should've noticed that before you took its being suppressed personally. Jimrandomh (talk) 02:54, 6 January 2011 (UTC)
I didn't say I had trouble understanding Roko's post -- I had trouble understanding how you got what you wrote from Roko's post. Here's the paragraph I was simplifying from (the meat of Roko's post)... it can be found on line 25 on the pastie link: "In this vein, there is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. This would act as an incentive to get people to donate more to reducing existential risk, and thereby increase the chances of a positive singularity. This seems to be what CEV (coherent extrapolated volition of humanity) might do if it were an acausal decision-maker.1 So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half). You could take this possibility into account and give even more to x-risk in an effort to avoid being punished. But of course, if you're thinking like that, then the CEV-singleton is even more likely to want to punish you... nasty. Of course this would be unjust, but is the kind of unjust thing that is oh-so-very utilitarian. It is a concrete example of how falling for the just world fallacy might backfire on a person with respect to existential risk, especially against people who were implicitly or explicitly expecting some reward for their efforts in the future. And even if you only think that the probability of this happening is 1%, note that the probability of a CEV doing this to a random person who would casually brush off talk of existential risks as "nonsense" is essentially zero." If you still feel this wasn't fairly translated into my text, please suggest some concrete edits for discussion. - Waitingforgodel 19:41, 5 January 2011
Hmm, that's one of the hazards of not having a post history to go off of, it's hard to keep track of what's been explained where. The exact conditions under which an AI would choose to torture are laid out later, not in the original post, in a text which you might or might not have seen but which I'm not at liberty to publish. Insofar as the original post suggested that an AI would torture someone solely on the basis that they knew about existential risk but chose not to do anything about it, the post is incorrect. Jimrandomh (talk) 03:56, 6 January 2011 (UTC)
1. I find it highly unlikely anyone knows why a superhuman intelligence might or might not choose to torture someone... but I really want to read those posts! :p if anyone has said posts please email them to me (aitorture.wfg@xoxy.net). 2. Does this mean you now agree that this is a correct summary? Thanks, - Waitingforgodel 20:04, 5 January 2011
I thought you had collected all the deleted content from Google's cache? Jimrandomh (talk) 04:18, 6 January 2011 (UTC)
Nope, I got my copy of the google cache from this version of the webpage, when the pastie expired, I reposted it. Waitingforgodel (talk) 04:46, 6 January 2011 (UTC)
Also - I think you should read this before accepting anything Waitingforgodel writes without a dump-truck full of salt. Also, Roko returned to Less Wrong some time later, as FormallyKnownAsRoko, and claimed that he deleted the post himself, albeit with prompting from Eliezer. Jimrandomh (talk) 01:36, 6 January 2011 (UTC)
Sure, I'd like to see this section returned because EY's censorship bothered me personally; people spend time on things they care about -- we already know that. re: was it banned... this sure sounds like he banned the post... that's the last comment in the thread I can find. On a related note, EY has deleted countless other posts asking about the topic, which should probably be mentioned. - Waitingforgodel 5:56, 5 January 2011
Perfect, actually. It makes more sense now you've added that part about backwards causality; it adds good context to the argument. Scarlet A 18:49, 5 January 2011 (UTC)
Thanks! What's left before we can include this in the main page? - Waitingforgodel 10:54, 5 January 2011 (PST)
Needs to be at least a paragraph shorter, and not include a link to the forbidden post without warning. (We think it's silly, but LW readers do come here a lot.) Also, that the OCD sufferers were donors is news - that'd definitely need citation. Where'd you get that from? - David Gerard (talk) 21:00, 5 January 2011 (UTC)
Why does the link to the forbidden post warrant a disclaimer when the idea that would upset them is already expressed in the paragraph preceding it? If we're really worried about the LessWrongians' peace of mind, maybe there should simply be a spoiler tag wrapped around the part with Roko's idea? Röstigraben (talk) 21:19, 5 January 2011 (UTC)
I'll go stick a warning on the homeopathy article, shall I? Scarlet A 21:32, 5 January 2011 (UTC)
LOL. I like the combo of these two ideas: a homeopathy spoiler section explaining how it doesn't work, that someone doesn't read until they start to get frustrated by their medicine not working :p - Waitingforgodel 15:07, 5 January (PST)
Thanks for the correction. It was ONE "SIAI affiliated" person according to the post. I'm not sure what can be cut w/o sending this content back to incomprehensible. Anyone? I'd assumed that talking about the nightmares before putting the link counted as sufficient warning. It's also hard to make a warning not increase the number of people who click a link... and talk about "incalculable damage to the future of humanity" without it sounding sarcastic... here goes nothing... - Waitingforgodel 14:04, 5 January 2011


If there's interest I could also expand this into a section (LW users have speculated that EY censored the post to give it more widespread attention, have wondered why EY wasn't worried about future AIs punishing him for preventing people from wondering if future AIs would punish them, EY has commented on the moderation policy for the foreseeable future, many users have posted "what happened?" threads and received varying answers re: why, a look at why EY thinks this is a real threat vs. what more traditional thinkers think about it). Do you think it warrants a section, or is that overkill? - Waitingforgodel 15:07, 5 January (PST)

I, for one, am not afraid of my imaginary friends. But please go ahead, if you think it would be a good section. ħuman (talk) 06:28, 6 January 2011 (UTC)
Providing the magnitude of the situation isn't exaggerated, I'd be all for sectioning it. I mean, it's a genuinely interesting thing to think of. Although perhaps a more thorough explanation of the problem itself would be better off in a separate article on singularity AI. Scarlet A 08:55, 6 January 2011 (UTC)
Okay great! It will be factual and interesting, its impact won't be exaggerated -- tentatively titled "The censored" for positioning under "The ugly". A broader look will (rightly) probably create a much longer discussion here due to the increased number of primary sources and the increased amount of discretion when describing so much LW discussion. I'm going through the logs I have and the LW site in general (thankfully, LW's own archives still contain discussion *around* the censored topic), and will post a first draft here Monday evening PST. I have no idea how long discussion will go, but based on how long this one took my guess is quite a while. Waitingforgodel (talk) 20:21, 6 January 2011 (UTC)

In the mean time, since it looks like all objections have been retracted or addressed (we even have a warning), and the section looks about two weeks out, can we put the above description in "the ugly"? Waitingforgodel (talk) 20:21, 6 January 2011 (UTC)

Okay, so after 25 hours I'm guessing that no one has any objections (as they hopefully would have replied), but no one feels like they have the authority to say "post it" given that others may object. I'm moving it to "the ugly" in good faith, please correct me if this is mistaken. - Waitingforgodel (talk) 21:35, 7 January 2011 (UTC)
Not bad. I have tightened it slightly. I have also given the punchline to the story, in which the valiant fighter for Internet freedom protested by making a threat anyone not an LW Kool-Aid mainliner would consider approximately insane. (I suppose the valiant freedom fighter then bringing it here counts as "bringing the Good News for Modern Rationalists, without clearing their local jargon cache".) - David Gerard (talk) 23:43, 7 January 2011 (UTC)
LOL, thanks :p. I've added some references. Will check back here while working on draft one of the section over the weekend. - Waitingforgodel (talk) 01:45, 8 January 2011 (UTC)

Here is my understanding of the Roko thing:

Basically, in an attempt to create a decision theory that one-boxes on Newcomb's paradox ( http://en.wikipedia.org/wiki/Newcomb%27s_paradox ), they made a 'decision theory' (not a proper theory, just some babble, like verbally defined theories in physics). If you act by this theory, you would one-box even if the boxes were both transparent, created before you were born, and you could just look and see that there is no million. They never bothered to check if their homebrew decision theory works on such a case, just like crank physicists never bother to check if they can even get approximately-Newtonian gravitation. Precisely the same phenomenon. Intellectual wankers don't think beyond what brings pleasure through the 'amazing insight feels good' pathway. This produces intellectual cum, which serves as porn for other intellectual wankers.

Roko reasoned that this same decision theory would make FAI "precommit" (post-factum) to waste resources on torturing you for not doing enough to help it come about. Hilarity ensued. Similar to a crank with nonworking physics calculating that the LHC would create a black hole and kill everyone, but with far less sense or intelligence or intellectual honesty or understanding of what a 'theory' is, and of how ill-defined ideas aren't theories.

The censorship has resulted in a lack of any explanation of why it is complete bullshit, and a lack of counter-bullshit for those who can only follow a bullshit argument and had nightmares. EY's response also indicated that he either took it seriously prima facie (not just deleted it because it was giving nightmares), which is nuts, or just pretended to, which is plain evil. In any case, he wouldn't hesitate to take extra donations from those mind-fucked by this. Dmytry (talk) 08:12, 19 May 2012 (UTC)

Lads, it's looking really quantum

I'm just reading the quantum physics sequence. This is one where EY explains quantumumum his way and how it hooks into his philosophies on consciousness, identity and so forth. He explains it pretty lucidly and I read it and feel like I more or less get it ... but then I read the comments, and see physicists who are less than thrilled with his ways of putting things and the assumptions he appears to be bringing along as baggage without noting that his interpretations are his rather than the normative ones.

Are there any physicists in the house who have a chunk of spare time and feel up to looking it over? The explanations are, as I said, apparently lucid - EY is a big fan of Feynman, as any sensible person is - but some expert sceptical opinionation would be informative - David Gerard (talk) 21:24, 21 January 2011 (UTC)

I have a working knowledge, although not as an actual physicist. Scarlet A 22:55, 21 January 2011 (UTC)
That's a good start - you're a nanoscale chemist, right, so you actually do this for a living? - now you just have to go through a LONG string of posts, each of which is a tab explosion, and the comments ;-)
The quantum sequence appears so far to be where a lot of the weirdy bits are justified. I'm getting little sceptical alarm bells I can't quite put a finger on going off as I read, which is why I'd like an expert opinion on basic quantum and EY's version.
(I am in fact ploughing through the Sequences. FUCKING HELL THEY'RE LONG. I'm doing this strictly in my "internet as television" time budget, or I'd just never bother. At least EY is mostly a good writer.) - David Gerard (talk) 23:11, 21 January 2011 (UTC)
Well, from your brief description I imagine that the QM sequence (and I do hate to form an opinion about it without reading it properly, but can't help it) is probably delving into the interpretations of quantum mechanics rather than the actual theory itself. This is because no matter what you do, if you're intellectually honest - which I believe Yudkowsky is - then you can't change the facts. So I imagine what you're reading is more pseudo-philosophy than science (it'd be very important not to confuse those two; even if they're talking about quantum mechanics it's not necessarily empirical rationalism). There is no supporting evidence for the interpretations besides some really silly thought experiments that don't really prove or disprove anything. If you work on the interpretations right you can get QM to say whatever the hell you want and you end up with nothing more scientifically rigorous or objectively true than just a slightly more aloof version of What The Bleep Do We Know?. Besides, all the interpretations make the fundamental mistake of assuming that the quantum world must be classical underneath and must "make sense". Then again, I'm pretty sure I'm officially an instrumentalist now (Herr Doktor Popper can go f**k himself) and if anyone starts talking about many worlds and particles disappearing I just tell them to shut up and come back with an experiment - anything else is just bullshit. Anyway, that's my instinct about what they'll be talking about and my "prior" (ha!) out of the way. I'll give it a read probably in the week while I'm doing some stuff. Scarlet A 13:41, 22 January 2011 (UTC)
Also, you do realise that "nanoscale chemist" is sort of like saying "physicist that studies reality", right? :P Scarlet A 13:43, 22 January 2011 (UTC)
Yes, yes, you're all jam doughnuts. You should add that paragraph or something very like it to quantum woo - David Gerard (talk) 17:56, 22 January 2011 (UTC)
Oh, and EY's is distinguished by not saying the world is classical underneath. Also, it's a better interpretation because Occam and Solomonoff and Kolmogorov say it's more elegant. And just happens to give the philosophical implications he wants. I'm still slowly dredging through while trying to keep track of the maths, so there may be more than that to it - David Gerard (talk) 23:50, 22 January 2011 (UTC)
I've adapted your para above to Quantum mechanics#Interpretations - David Gerard (talk) 17:30, 23 January 2011 (UTC)
Well, I have started the sequence and tried to navigate around it... interesting and very true tidbits are scattered throughout, but ultimately if it's supposed to be a coherent introduction and explanation it's explained ass-first. I can find the part where he says "Many Worlds is true" but can't find the part where he says "here's my evidence that Many Worlds is true". I can find the part where he says "here's my intuitive explanation for students that are confused" but can't find the actual explanation anywhere! No, really. I can see he writes that you shouldn't start teaching quantum mechanics by discussing particle-wave duality but it seems like his alternative starting point is to ramble on about the nature of reality instead. No, no, no, no, no! Teach the facts; let those settle; then discuss the implications. I've had a click and read through what look like the key pieces, but he at no point seems to have stood back to analyse everything and then write out his thoughts in a coherent manner. Fair enough, it's supposed to be a collection of vaguely categorised blog posts (that you'd keep up with if you read them weekly for 5 years) rather than a single cohesive piece of work, but if the instruction is "read the sequence" a workable sequence is what you should expect, not to get 5-6 posts in and wonder what the hell you've missed. It's certainly not coherent enough (and certainly doesn't start in the right place) to explain QM to anyone who hasn't already grasped it thoroughly enough to start pondering the philosophical implications completely independently - and if you're at that stage there is no point in reading what Yudkowsky has to say on the subject. The entire thing blurs together Yudkowsky's personal philosophies, rationalism, and experimental evidence behind quantum theory. As far as I'm concerned this is an absolute cardinal sin; you must separate fact from philosophy (although I gather that he's so convinced by it that his philosophy is fact) and distinguish what we all know and what you just think. As I said just above, give the facts then discuss it. The writing is pretty idiosyncratic too, in both the good and bad way. In particular, you really shouldn't use silly "fun" explanations of things and then hand-brake turn into mathematics like he does. This might be a side-effect of them being badly ordered, though. Hopefully if he does put this into a book he'll clean it up to be readable as I'm really not surprised that people have trouble going through it. Scarlet A 14:19, 26 January 2011 (UTC)
Of course, I should clarify that I'm not saying Yudkowsky is wrong, just that his explanations here are so piss poor that I can't quite make that judgement yet. Scarlet A 14:38, 26 January 2011 (UTC)
I'm impressed. You guys have got a lot more patience than I have.--BobSpring is sprung! 14:41, 26 January 2011 (UTC)
Well, in fairness I'm probably being overly harsh. Since I haven't yet read all of it I don't know which posts will have the magical epiphany in them. That said, I can describe it by paraphrasing from a course feedback note regarding my PhD supervisor giving a lecture course: "He clearly knows what he's talking about, it's a shame no one else does". Scarlet A 15:06, 26 January 2011 (UTC)
As far as I can make out, what he's said is "decoherence in configuration space is what the equations say, and that simply constitutes many-worlds, not even 'therefore.'" I may have oversimplified. Any LW readers who've actually read the QM sequence care to correct me? - David Gerard (talk) 16:22, 26 January 2011 (UTC)
I am not a professional physicist but I do have a master's degree in physics. His analysis of configurations and amplitudes is just wrong. Here is my disproof of his explanation: http://www.ex-parrot.com/~pete/quantum-wrong.html — Unsigned, by: 212.69.53.254 / talk / contribs
Linked - David Gerard (talk) 23:06, 10 March 2011 (UTC)

Crack discovered

[2] Check the post, compare to the comments - David Gerard (talk) 02:13, 26 January 2011 (UTC)

I'm not surprised. He really came on strong for MWI; this post looks like the culmination of the ones before it. I wish I could say more, but the QM sequence was one of the few I skipped. Tetronian you're clueless 04:39, 26 January 2011 (UTC)
Everyone skipped it. Low on votes (it's the only part of the Sequences where you'll find EY posts rated 0 or even -1), not many commenters, and those commenters pointing out how bloody stupid he's being, and that a lone theorist railing against science is a reliable indicator of gibbering crackpottery. This was three years ago. I wonder how the rationalism book will look. The QM sequence is used as the basis for the odder bits of philosophy in a way that even most LessWrongers don't appear to examine - they seem to read the headlines and just go along with it. Yay rationalism! - David Gerard (talk) 10:19, 26 January 2011 (UTC)
Is he/are they basically saying they can ignore science when they feel like it? Pegasus (talk) 12:32, 26 January 2011 (UTC)
No, it's more elaborate than that - Bayesian epistemology beats Science. With a capital S, that being the idealised version of Science he's thinking of rather than what scientists actually do (he actually said this). Based on him failing to get good results from his idealised version of Science when he was 18. I must note that EY is extremely smart and really very clueful in other areas, so when he's posting crack it's subtle and elaborate crack. Read the linked posts - David Gerard (talk) 12:43, 26 January 2011 (UTC)
Well, I had a really tl;dr rant, but then deleted it. If I see LW as pages and pages of boring crap I'm not going to dispute it with pages and pages of boring crap. However: "Facts without theory is trivia, theory without facts is bullshit". Scarlet A 13:09, 26 January 2011 (UTC)
Sayeth EY: "I figure that anyone who wants to paint me as a lunatic already has more than enough source material to misquote. Let them paint and be damned!" (from the comments of "Do Scientists Already Know This Stuff?") Tetronian you're clueless 13:48, 26 January 2011 (UTC)


*Snort*

This is just too funny. But not in the way the author intended. Tetronian you're clueless 19:34, 27 January 2011 (UTC)

That's not even words... Scarlet A 20:00, 27 January 2011 (UTC)
Typical LW jargon explosion. When the esoteric discussions like that begin, everyone completely ignores the fact that other people might want to read the thread and understand what's going on. But just the notion of "acausal sex" made me laugh. Tetronian you're clueless 00:55, 31 January 2011 (UTC)
Hmph. HonoreDB (talk) 08:52, 8 February 2011 (UTC)
Acausal conception - David Gerard (talk) 10:38, 8 February 2011 (UTC)

Crit

I thought I'd copy/paste this from here. I'd have liked to have seen it brought up on the talk page rather than on a different comment board with a sarcastic "hope you're reading this" tag (not to put too fine a point on it, but we have open editing on talk pages for precisely this reason), but since I'm clueless by proxy I suppose I wouldn't have understood... anyway, some of this does need addressing; I've always wondered about the "...Descartes, Spinoza or Leibniz voted down into oblivion" thing too. Scarlet A 12:00, 18 February 2011 (UTC)

I agree with pretty much all of what you've said, but I won't edit the article unless someone else agrees, because I'm definitely a biased person with respect to LW - I spend as much time there as I do here. Plus, I never understood why "trying to reinvent the sense of awe associated with religious experience in the name of rationalism" is bad. Tetronian you're clueless 19:34, 5 March 2011 (UTC)
I don't care if you have an interest in LW; if you can address the issues, you're welcome to. Scarlet A 19:47, 5 March 2011 (UTC)
I spend as much time there as here (and I'm going to the London meetup tomorrow - RWians who would be suitably amused by otherwise-smart people holding them personally responsible for the entire contents of RW, do come along!), which is why I feel free to edit this article. Hack away - David Gerard (talk) 19:57, 5 March 2011 (UTC)
Fair enough. And let me know how the meetup goes! I was going to go see EY speak at NYU this week but I couldn't make it. Tetronian you're clueless 20:00, 5 March 2011 (UTC)

Criticism (cont)

It looks like it turned awful since I've read it the last time:

This essay, while entertaining and useful, can be seen as Yudkowsky trying to reinvent the sense of awe associated with religious experience in the name of rationalism. It's even available in tract format.

The most fatal mistake of the entry in its current form seems to be that it lumps together all of Less Wrong and therefore stereotypes its members. So far this still seems to be a community blog with differing opinions. I got a Karma score of over 1700 and I have been criticizing the SIAI and Yudkowsky (in a fairly poor way).

I hope you people are reading this. I don't see why you draw a line between you and Less Wrong. This place is not an invite-only party.

LessWrong is dominated by Eliezer Yudkowsky, a research fellow for the Singularity Institute for Artificial Intelligence.

I don't think this is the case anymore. You can easily get Karma by criticizing him and the SIAI. Most new posts are not written by him anymore either.

Members of the Less Wrong community are expected to be on board with the singularitarian/transhumanist/cryonics bundle.

Nah!

If you indicate your disagreement with the local belief clusters without at least using their jargon, someone may helpfully suggest that "you should try reading the sequences" before you attempt to talk to them.

I don't think this is too much to ask. As the FAQ states:

Why do you all agree on so much? Am I joining a cult?

We have a general community policy of not pretending to be open-minded on long-settled issues for the sake of not offending people. If we spent our time debating the basics, we would never get to the advanced stuff at all.

It's unclear whether Descartes, Spinoza or Leibniz would have lasted a day without being voted down into oblivion.

So? I don't see what this is supposed to prove.

Indeed, if anyone even hints at trying to claim to be a "rationalist" but doesn't write exactly what is expected, they're likely to be treated with contempt.

Provide some references here.

Some members of this "rationalist" movement literally believe in what amounts to a Hell that they will go to if they get artificial intelligence wrong in a particularly disastrous way.

I've been criticizing the subject matter and got upvoted for it, as you obviously know since you linked to my comments as reference. Further, I never claimed that the topic is unproblematic or irrational, but that I was fearing unreasonable consequences and that I have been in disagreement about how the content was handled. Yet I do not agree with your portrayal, insofar as it is not something that fits a Wiki entry about Less Wrong. Just because something sounds extreme and absurd does not make it wrong. In theory there is nothing that makes the subject matter fallacious.

Yudkowsky has declared the many worlds interpretation of quantum physics is correct, despite the lack of testable predictions differing from the Copenhagen interpretation, and despite admittedly not being a physicist.

I haven't read the quantum physics sequence but by what I have glimpsed this is not the crucial point that distinguishes MWI from other interpretations. That's why people suggest one should read the material before criticizing it.

P.S. I'm curious if you know of a more intelligent and rational community than Less Wrong? I don't! Proclaiming that Less Wrong is more rational than most other communities isn't necessarily factually wrong.

Edit: "[...] by what I have glimpsed this is just wrong." now reads "[...] by what I have glimpsed this is not the crucial point that distinguishes MWI from other interpretations."


The above comment was unsigned and wrongly formatted. Anyway, I think I recognize the writer as LW user XiXiDu. Bo (talk) 18:28, 19 March 2011 (UTC)

It's not unsigned; I copy/pasted it from the LW forums. Scarlet A 22:27, 19 March 2011 (UTC)
Heh, sorry. At least I got the author right. Bo (talk) 12:24, 21 March 2011 (UTC)

Made it.

I have made it through the Sequences, and their comments. Most are reasonably comprehensible, the QM bit set off small alarm bells (as noted above) and I only really suffered terminal tl;dr at the last one, on decision theory.

But now LW posters seem to have pretty much expunged the phrase "You really should read the Sequences" from their collective vocabulary. Bah!

I have not converted to a rabid fanboy and have not found myself a high-paying job in the City for the single purpose of donating as much of my income as possible to SIAI, despite the detailed arguments raised repeatedly on LW in favour of such an action - David Gerard (talk) 16:01, 2 March 2011 (UTC)

I got half way through QM and decided to actually read a proper textbook on the subject instead. Scarlet A 18:34, 2 March 2011 (UTC)
@David: Obviously you are not smart enough. Go read them again.
I'm kidding, of course. I wasn't converted either. But the Sequences did introduce me to some really interesting reading material, namely the sources that EY rips off borrows from. Tetronian you're clueless 19:31, 5 March 2011 (UTC)
Careful there, EY might have to call you clueless again. Scarlet A 20:12, 5 March 2011 (UTC)
Heh heh. Being called clueless would be a social kiss of death over there. Overall, though, I think he is pretty much right about the rationality stuff (Bayes/cognitive science/language), but I still don't buy into MWI or FOOM. Tetronian you're clueless 20:26, 5 March 2011 (UTC)
I actually find the FOOM stuff (and the entire point of SIAI) reasonably plausible, but I comfort myself with the thought that I am neither bright enough nor interested enough to be any help with it whatsoever. The idea of "a community blog devoted to refining the art of human rationality" is a fantastic mission statement - it really is one of the best I've ever seen - and I am 100% behind that stated goal, and as long as that's on the masthead then LW will tend toward it. Probably - David Gerard (talk) 21:22, 5 March 2011 (UTC)
Today at work (on internal IRC #chat) I used the concepts of Bayesian probability, networks and epistemology to slap down someone claiming language was entirely a matter of opinion. And it was all Yudkowsky's fault - David Gerard (talk) 23:08, 10 March 2011 (UTC)
So what were you saying about being able to spot a LWian in a conversation, again? Scarlet A 18:55, 20 March 2011 (UTC)
I also told them to give all their money to SIAI I don't know what you're talking about - David Gerard (talk) 23:26, 20 March 2011 (UTC)

Errors in The Ugly

1. Did Eliezer ever say that the basilisk "could have caused incalculable harm to the future of humanity"? I thought he only said it could cause harm to its readers. I tried to change this but was reverted.

1.1 In fact, Eliezer said that "Roko's original proposed basilisk is not and never was the problem in Roko's post." I agree and think that Roko's post only makes real basilisks easier to accidentally invent, thus the risk of harm to readers.

2. The butthurt poster on the other hand explicitly admitted that he's trying to increase existential risk ("harm the future of humanity") to deter censorship. But not by "posting things on internet forums". See this post.

3. (an omission) Roko has since commented that he agrees that the post shouldn't be out there, and objects to the censorship being considered censorship instead of retraction.(1) (2).

Bo (talk) 12:39, 21 March 2011 (UTC)

"LessWorngians" the "rationality page" ROTFLMAO[edit]

That link that was given, way down the page, with "RationalWiki is only "rational" in quotation marks, unfortunately" has given me the biggest laugh of my day. My god, what pompous idiots over there. I would hope it were a spoof / Poe site, but no, they really do sound like a bunch of my first-year Philo students, shoved deep into the idea that somehow saying things that are seemingly incompatible, in a Sartre way, and nodding with the Baudrillardian "see, you cannot follow my gibberish 'cause I'm way above you and speak French" sense of philo authority. What a funny, funny site. Thanks for the laugh. (Not that I think we are necessarily truly rational or anything, I just find their comments amusing. "They cannot distinguish the woo from the truly bad woo".) I do hope it's a Poe site. --En attendant Godot 13:49, 21 June 2011 (UTC)

"Poe" does not mean "parody" or "satire", but the inability to discern parody or satire from a "straight" text. Signed, the PDNMPS club.--ZooGuard (talk) 14:57, 21 June 2011 (UTC)
Oh, I've always thought it was just parody. My bad.--Sun mowse.pngEn attendant Godot 15:05, 21 June 2011 (UTC)
They're actually a really interesting bunch when you get to know them. And read thousands of words of LW material, which is an uphill battle but worth it. I believe David Gerard has even been to some LW meetups. Tetronian you're clueless 01:40, 22 June 2011 (UTC)
They're as stupid as any humans, but very smart and erudite about it. I treat LessWrong as Internet television. I'm now working on my grand synthesis of postmodernism and Bayesian epistemology. Last night I discovered that Derrida's notion of differance describes the process of Bayesian updating when learning a language - David Gerard (talk) 07:21, 22 June 2011 (UTC)

Harry Potter

Lord it's amusing and sad. Is the OP a troll? https://forums.darklordpotter.net/showthread.php?t=20061 --145.94.77.43 (talk) 23:09, 31 August 2011 (UTC)

Troll forum, best ignored under all circumstances - David Gerard (talk) 15:38, 1 September 2011 (UTC)

...into an Ivy League college?!

(Disclosure: I am not an acolyte of Yudkowsky's camp or anything. I am a doctoral student of philosophy who has become interested in AI and has found much of interest in Y's writings, particularly his long paper on 'Levels of Organization in Artificial Intelligence'.)

My suggestion is just this: if Y dominates LessWrong, and if he isn't affiliated with a university, then how does this line fit: 'waiting for us to get off the drugs and sex and follow them into an Ivy League college'? Is the idea that the reference to college is still just a metaphor? If so, I would suggest that the sentence is a failure: people will not generally take it to be metaphorical.

True. Fixed? - David Gerard (talk) 07:37, 12 October 2011 (UTC)
I don't think it was a direct reference to Yudkowsky. Indeed, the "Ivy League" is just an athletic thing, not academic. It's just that it's become short-hand for "this is a good college to go to" in much the same way that MIT says "this nerd from NASA knows what he's talking about" in films, despite the fact that MIT is just basically a university and nothing particularly magical (I'm trying to find the relevant TV Tropes article but it seems to have gone walkies). So yes, the reference is very metaphorical. ADK...I'll subpoena your electron! 11:36, 12 October 2011 (UTC)

Morality

I have been intrigued by the possibilities offered by some of the concepts and approaches of LW, but I wanted to see them in action at arriving at moral conclusions. Accordingly, I searched for their article/discussion on abortion. All I could find were a few tangential items, such as this, which labeled the conflict as inflammatory and came to no conclusions - just a back-and-forth series of discussions like you'd find anywhere. Then I looked for their articles or discussions on other moral topics, and still could find very little. In fact, it looks like 90% of LW is just discussions about methods of thinking, and almost no real application. Abstract theory, labels of different biases and methods, practical assessments of outcomes, etc.: there's a lot of that, and discussions of that. And discussions of those discussions and parsing of terms and so on. But there seems to be very little application of their methods/thinking to contentious issues like animal rights, justification of war, euthanasia, prostitution, drug legalization, etc.. The biggest exception is the question of the existence of God, but that's just opening up the need for moral discussions even more since there's near-universal LW agreement on atheism.
Now, I'm not very familiar with LW, so it's very possible I'm just not looking in the right place. Or maybe my expectations are out of whack, and their community never intended to touch on morality. But it seems strange to me that such an elaborate system of thought, filled with jargon and common memes, could have developed without ever being systematically applied to moral issues beyond atheism and the ubiquitous AI and cryonics obsessions. A little help? Am I looking in the wrong place?--ADtalkModerator 03:30, 14 October 2011 (UTC)

Yeah, it's always bugged me what, exactly, this cryonics and AI "research" actually entails. It seems to be general futurist stuff; we'll sit and do the thinking about what happens when we get that tech, but the tech itself we'll leave to someone slaving away in some university basement for no money. ADK...I'll crinkle your gas tank! 06:47, 14 October 2011 (UTC)
Well, I don't know. My complaint is really just that their methods of rational thinking never seem to be usefully applied to anything but the most banal problems. I'm sure it's useful to decide a lot of things - one of the examples on their frontpage refers to determining if your doctor is competent - but I want to see conclusions on moral questions, too.--ADtalkModerator 02:32, 17 October 2011 (UTC)

Link in citation 20 is broken

Citation 20 is supposed to link to a Google cache of the "basilisk" post. As of March 2012 it's leading to an error page. — Unsigned, by: 88.104.222.58 / talk / contribs

The "X is a mindkiller" meme, and AI xenophobia and hostility

A rather curious meme I observed there: X (politics, whatever else) is a "mindkiller". Obviously the only people who can be "mindkilled" are others, and therefore it is totally valid to flat-out ignore anyone else and not bother to listen. Ditto for the rest of rationality - it seems that their only practical use for rationality techniques is rationalization. The one popular topic is AI risk, a topic constructed to evoke broad, fuzzy, far-future thought with abstract concepts such as Mankind, and to trigger xenophobia. Other topics, like politics, are a no-no, as they would break the feeling of rational agreement; they would provide anyone with at least a few abundantly clear examples - on topics towards which one is neutral - of how they are rationalizing rather than being rational, using what one might mistake for rational argument when it aligns with one's views. The end result is that rationalizations of some innate xenophobia are mistaken for a rational, high-quality argument for a really scary AI; this is dangerous because it inspires hostility towards genuine AI researchers. Dmytry (talk) 10:11, 29 March 2012 (UTC)

It's been noted before that "politics is the mindkiller" is the mindkiller. More of a complete clusterfuck is when these autodidactic geniuses decide they're brilliant, enlightened and nonconformist enough to even talk about IQ and race, as if that's a scientific topic rather than a social one. (In the real world, "IQ and race" almost always means "black Americans vs white Americans", and black Americans do not constitute a genetic group - Africa being where humans are from, it has the greatest human genetic diversity. Unless you're talking about inbred groups like Icelanders or Ashkenazim, human "race" is pretty much a cultural construct, not a genetic one. And then there's the easy 15-20 IQ point variance from purely social rather than genetic factors ...) I don't think they're intentionally racist, and they haven't attracted any actual neo-Nazis as yet, but it's still a punk swastika in essence and just as stupid. Ah, well-off libertarians and their unexamined privilege - David Gerard (talk) 10:23, 29 March 2012 (UTC)
Update: a "race realist" stopped by [3] and the LW community politely told him to fuck off and downvoted him into oblivion. Which is nice to see - David Gerard (talk) 19:16, 11 May 2012 (UTC)
And Luke is actually working on reining in the racist libertarians, which is good. A bit gently, but his problem is to actually convince a fuckhead they're wrong about something, and that's philosophically difficult - David Gerard (talk) 16:14, 22 May 2012 (UTC)
Funny: after this thread, I posted that "politics is the mindkiller" is the mindkiller, to see how much they like it - 6 dislikes and counting. [4] They like their meme. Anyhow, what pisses me off is that they don't get to see their own irrationality via political debates, while they profess this irrationality on the select few topics where they are all irrational in the same way due to selection bias. The end result is a massive circlejerk on those issues. Dmytry (talk) 11:12, 30 March 2012 (UTC)
It seems to be a favorite cached thought. Of course, LW does not promote techno-libertarianism. No way, no sir, nuh-uh. All that is apolitical! Nebuchadnezzar (talk) 18:32, 21 April 2012 (UTC)
I will note, by the way, that LessWrong is still some of my favourite internet television, as it turns out I enjoy arguing philosophy with smart people. And it does actually remind me to try to be less stupid in daily life. Its defects are glaring, though - David Gerard (talk) 10:27, 29 March 2012 (UTC)
I think when you get too sucked into the AI side of things and start going off on their Scary Ideas, then you risk throwing a lot of good stuff out. In general there's good stuff on there, and it would be a shame to, as they say, throw the baby out with the bathwater. At the very worst, locking Eliezer Yudkowsky in a room with Robin Hanson and throwing away the key would be the single most hilarious thing one could ever do. Scarlet A.pnggnostic 10:52, 29 March 2012 (UTC)
Could they talk their way out over a text terminal, though? Their new Center for Modern Rationality thing - basic "how not to be stupid", not the AI stuff - may not suck - David Gerard (talk) 12:45, 29 March 2012 (UTC)
Well as a programmer, the AI shit is of professional concern to me. Just wait another 10 years. There's downright creepy shit [5] Dmytry (talk) 16:27, 29 March 2012 (UTC)

Do they ever use knowledge of biases for anything but rationalization and self-pleasure?

Consider their example. How can a list of biases ever help you in advance to invest wisely? You need not a list of biases, but high-grade reasoning, like statistical analysis of the stock data - as in, sitting down and writing a program that analyses the stock data. You can also go with training a neural network to do this, using the network in your skull: you sit through the training dataset, make predictions, reward yourself for matches, then try another dataset, and so on, and try to keep your mind clear of verbal nonsense like second-guessing what biases are there. And above all, you must not cherry-pick the dataset that you are training yourself on, such as picking 3 years of one fund on which heuristics fail. It seems to me that the only way in which a list of biases increases utility is the positive feelings you get thinking of how wrong everyone else is. (Note that the list of biases is interesting from a sociological perspective, but that requires quantitative, not qualitative, data.) The blacklisting of biases won't make flawed thought any less flawed. I deal a lot with a form of signal processing - computer graphics. Often you can only attain very low accuracy in a specific time if you use the ideal method - so you introduce 'biases' to produce consistent results between frames, or otherwise to improve the output. Removing biases here does not improve the output; in the ideal case the biases will be replaced with randomness. Likewise with thought: the only solution is thinking harder and better. I do not know what goes on inside the typical LessWronger head, but inside mine, and I think most people's, there is no struggle between some 'biases' and high-grade reasoning or well-trained expertise (à la that neural network training above), whenever high-grade reasoning is possible for the problem at hand. Dmytry (talk) 07:57, 21 April 2012 (UTC)
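To make "sitting down and writing a program that analyses the stock data" concrete, here is a minimal sketch of that kind of quantitative check; it runs on synthetic random-walk prices, since no real dataset is in question here, and the figures it prints are purely illustrative:

```python
# A minimal sketch of "writing a program that analyses the stock data",
# run on synthetic random-walk prices since no real dataset is assumed.
import random

random.seed(0)
prices = [100.0]
for _ in range(1000):                       # fake daily closing prices
    prices.append(prices[-1] * (1 + random.gauss(0.0003, 0.01)))

returns = [b / a - 1 for a, b in zip(prices, prices[1:])]
mean = sum(returns) / len(returns)
stdev = (sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)) ** 0.5

print(f"mean daily return: {mean:.5f}")
print(f"daily volatility:  {stdev:.5f}")
# The conclusion comes from measuring the whole series, not from a checklist
# of named biases or a cherry-picked three-year window.
```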

Read any Nassim Taleb before? Scarlet A.pngpathetic 15:36, 21 April 2012 (UTC)
No... dunno how related; he seems to say that the statistical methods suck for market prediction, which is kinda obvious given that they're all doing those methods. I would rather use a neural network and read shit like news. In any case I don't see how exactly knowing of some supposed list of biases helps anyone reason; if you remove a particular kind of structured way to err, you won't suddenly become more accurate in any useful way, you'll just be wrong in a nonstandard way - usually more wrong, but less-debunked wrong. Like, rather than investing in a crank free-energy perpetual-motion device, you'd donate to some high-school drop-out who never produced anything complicated, to work on friendlier AI than anyone else's AI. Edit: I blogged about that even. Other comments here also noted that LessWrong never seems able to actually apply rationality to anything. It seems to me that this listing of biases is literally never useful, as they just proclaim they don't have biases. Dmytry (talk) 09:27, 22 April 2012 (UTC)
This is hilarious. Actual AI researcher to SIAI: "I'm trying to be polite here, but you guys are cranks." - David Gerard (talk) 09:01, 23 April 2012 (UTC)
Hehe. Yeah, "The AGI community has ignored it, not because it is indisputable, but because people have not bothered to criticize it." perfectly summarizes my initial state, before I came across the case of some guy wanting to make a lot of money and donate it to SIAI (wtf, scam for money?!), then came across the fab-sabotage post (WTF, self-scam for money and fuzzy moral utilons?!). I don't go on Steorn or Rossi energy-catalyzer or whatever other related forums and argue. That's because they don't post anything even remotely resembling plans to attack conventional power plants (not under any interpretation). Dmytry (talk) 13:10, 23 April 2012 (UTC)

A criticism they can't ignore, from a charity directory: "You guys are incompetent, and your mission is best helped by not giving you money." Though to be fair, Luke is taking it seriously - David Gerard (talk) 13:30, 11 May 2012 (UTC)

Well, from the very head of this post it would seem that they take seriously that such a statement exists, and perhaps take seriously that the strings they produce result in the generation of such statements, but they do not take seriously the possibility that they might be lacking something when it comes to actually knowing what the AI would do, and otherwise reasoning about AI. Actually, I've read some more about Luke. He's a sad case. He pursued theology until the age of 22, and that leaves one with so many bad habits of thought that there's probably not much hope for him to do anything other than theology on AI. Also, someone linked me the taxonomy of oracle AI http://lesswrong.com/lw/any/a_taxonomy_of_oracle_ais/ ; if you think I had a poor opinion of them before, the opinion is now at the rock bottom of 'they are technobabbling, and damn it, I fell for it and did read sense into some of the babbling'. Now (after reading about theology) I just feel bad for Luke (I used to think the BS-making was somewhat more free-willed). Dmytry (talk) 10:03, 14 May 2012 (UTC)
At least he didn't join Scientology - David Gerard (talk) 12:14, 14 May 2012 (UTC)
Wow. Well, I'd say he's gone over various replacements for his theism and picked the one that latched onto his psyche the worst. Also, it is rather depressing to see how awareness of one's own reasoning problem ( http://commonsenseatheism.com/?p=16256 ) doesn't translate into a practical solution (staying the hell away from people who promise you grand purpose). I guess it's like how awareness of one's own internet procrastination doesn't magically translate into doing something productive. I still think there has to be some over-inflated self-esteem, though, to see it as a great loss to society if you were to stay away from stuff that screws you over. Dmytry (talk) 12:35, 16 May 2012 (UTC)
To answer the question in your heading, it does appear that Luke applies the skills to basic arse-enablement, cf. [6]. Nonprofit Kit for Dummies sounds like the book that many charities I've been personally involved with spend years floundering without, then basically write for themselves. I wonder if the RWF has a copy - David Gerard (talk) 15:28, 14 May 2012 (UTC)
Hmm mmm. Well, rational money-getting for irrational/dumb research is bad. Also, basic honesty demands more work on trying to avoid accidentally wasting other people's money.Dmytry (talk) 12:35, 16 May 2012 (UTC)

Something else: Who is Michael Vassar? I see work in 'personalized medicine' and 'music licensing'.

Used to have Luke's job - David Gerard (talk) 12:14, 14 May 2012 (UTC)

LessWrong on RationalWiki on LessWrong

Part n+1. Also, LessWrong on Neil deGrasse Tyson on cryonics. Though if you are familiar with their existing views, neither will bring you anything new.--ZooGuard (talk) 17:30, 10 May 2012 (UTC)

The latter is pretty depressing reading, but the former I don't think I even want to bother with. Scarlet A.pngmoral 22:51, 10 May 2012 (UTC)
Wait, so this subsection would be "RationalWiki on LessWrong on RationalWiki on LessWrong", right? Shitsticks. Scarlet A.pnggnostic 13:47, 11 May 2012 (UTC)
Wait until I tell them about this on LessWrong. --√2 (talk) 13:14, 14 May 2012 (UTC)
"LessWrong on RationalWiki on LessWrong on RationalWiki on LessWrong" would make even Douglas Hofstadter spin. Scarlet A.pngbomination 18:37, 18 May 2012 (UTC)

Why bother reading LessWrong?

LessWrong is supposedly about how to avoid biases and being a better thinker (when not engaged in Yudkowsky worship and having nightmares about being tortured by an AI). But... to get hold of that information, why not read a standard science-based textbook on the subject? Instead of the blogathon of a random Internet kook? Can someone please explain this to me?

Because the central sequence is kinda interesting, is where Yudkowsky knows what he's on about, and in contrast to most conventional philosophy texts on the subject, can be read by a human being - from then on it's a soap opera about who can most selectively apply it. Scarlet A.pngpostate 17:30, 18 May 2012 (UTC)
I don't really go there much, but from what I understand nobody very new demands the 'you need to read the stuff,' only people already firmly grounded in the community. Meaning, they may go there because they have a sense of togetherness or community with like minds, rather than the tenuous and barely-coexisting cordial relationships we all have with each other in this fine establishment. Not to say their way is better; there seems to be an awful lot of parroting. I'll take vitriolic head-butting over sycophantic head-nodding any day of the week. ±Knightoftldrsig.pngKnightOfTL;DRcritical thinking is the key to success! 17:37, 18 May 2012 (UTC)

What I meant is, why should anyone rely on a random Internet person rather than standard scientific texts on the subject of biases and brain deception? Yudkowsky spends an awful lot of time inventing new words for already existing concepts, throwing in fringe beliefs (singularitarianism, cryonics etc) in the mix.--Baloney Detection (talk) 18:10, 18 May 2012 (UTC)

Hello, welcome to the internet! Why should anyone read any blog by pretty much any blogger? Why read PZ Myers's views on, say, feminism, or God? He's a scientist, not a philosopher. Hell, why read Wikipedia's description of, say, Roe v. Wade, when you can go out and actually read the decision yourself? --Green mowse.pngGodot 18:15, 18 May 2012 (UTC)
Myers as a scientist could well have worthwhile points about religion, given that it is something that often interferes with science. His writings about feminism are his personal opinions, and I doubt anyone treats him as an expert on the subject. Yudkowsky, on the other hand, is treated as an expert on his subject at LW.--Baloney Detection (talk) 18:20, 18 May 2012 (UTC)
"Yudkowsky spends an awful lot of time inventing new words for already existing concepts..." As someone with a cognitive science background, that's one of the various things that irritates me about the site. Quite a few times, I've had to click back through a whole series of posts to figure out what some cutesy term means only to find out that they're talking about some basic concept and I sit there thinking "Oh, that's what you mean. Why the hell didn't you say so to begin with?!" I don't expect it to be a technical discussion on the level of a peer-reviewed paper or anything, but use of actual terminology at some point would be nice. I imagine anyone who is led to become interested in cognitive science by LW will have to spend a bunch of time unlearning the local jargon. Nebuchadnezzar (talk) 10:04, 19 May 2012 (UTC)
If you have any links to such examples, please consider adding them in the entry. "Requiredism", wtf? This is the kind of stuff I meant in this topic. If one wants to learn something about what LW writes, why not just read a mainstream science/university textbook on it?--Baloney Detection (talk) 18:31, 19 May 2012 (UTC)

The AI stuff (especially the Roko bit) and other nonsense is an empirical test of 'does it work?'. Going onto LessWrong to be less wrong is like going to the Scientologists to be more scientific. They are mentally masturbating, not thinking, and this is the collection of their porn. The stuff about AI is just mumbo jumbo with no actual sense; at best it can be charitably interpreted as ideas.

new additions

The additions that are being reverted should really not be part of a quality article. They are not only irrelevant pot-shots at particular people on the site, they are weasel-worded. "Some are", but I suspect others are not. Finally, there is no real argument made that people from "serious religious backgrounds" would be more or less likely to be rational or irrational. --Green mowse.pngGodot 18:00, 18 May 2012 (UTC)

Hell, some of the least sensible people I know are hardened atheists. But more generally, yes, that's just stupid point-scoring and if it was left in, it would certainly be a "why do you call it RATIONAL wiki" moment, because it's just nonsense. Scarlet A.pnggnostic 18:15, 18 May 2012 (UTC)

Fair enough, but I think it would be useful to have a list of some of their nuttier beliefs. Otherwise the article will just look like a case of Judean People's Front vs People's Front of Judea. Also, the similarities between religion and singularitarianism ARE rather striking, and since the site is operated by the Singularity Institute, it could be worth pointing out.--Baloney Detection (talk) 18:18, 18 May 2012 (UTC)

@Adk, oh, you mean Hitch, Dawkins and Harris. ;-) j/k. Baloney, nothing's wrong with pointing out things you find to be nutty. I personally find that hive site to be a bunch of men playing mental masturbation games. but I'm probably way too biased to put any of that in there. The problem is when you just make large shots in the "name" of criticism. Just be specific and give valid reasons why it's a criticism, not just (like my view) "I don't like them."--Green mowse.pngGodot 18:20, 18 May 2012 (UTC)
I was thinking "know personally", but since you've mentioned Harris... anyway, I can see the intellectual masturbation, but I don't see it as extensive and as damaging as you might imagine. There are interesting ideas, and then there are ideas that are well-presented, and then there's Robin Hanson occasionally cropping up in the comments to tell people to stop the personality cult thing. Simply put, I'm loathed to be too critical of LW because there's nothing there that I couldn't also say about RW in some form or another. Scarlet A.pngsshole 18:36, 18 May 2012 (UTC)
There is a reason I said my bias keeps me from editing. I am very aware that I often find things I don't like about X or Y, and it colors my ability to see "clearly", if you will. The entire group is very dismissive, very "if you don't think this way, you are thinking wrong", and very "I'm right because I said so - but here's some stuff that I claim proves what I say". There's that position, and the sense that they discount anything not deemed "logical" (again, logic is a mathematical process that has nothing to do with the validity of what goes INTO the process: an incorrect statement, or one that is a guess, can still be placed into a "logical" formula and you can say "I have logically shown Y" - or, simply, truth value and logic are not necessarily requirements for each other). When I have talked with them about language, for example, and how thoughts are created (an area we have no idea about, by the way, despite some of their claims to the contrary), they dismiss my position as "gut instinct" (it is), and therefore "discounted". Anyhow.... I don't edit. ;-)Green mowse.pngGodot 19:17, 18 May 2012 (UTC)
Ok, I see... Btw, how do LessWrongians view ordinary scientists and science? From what I can find at their site, they don't like academia and the "medical establishment" due to its position on cryonics. They also have a rather unorthodox view of what science is, but they don't appear to be directly anti-science. Is Yudkowsky the kind of person one could imagine being invited to a scientific conference or as a speaker at TAM? You mention Hitch, Dawkins and Harris. Do you think Yudkowsky is more or less rational than these individuals?--Baloney Detection (talk) 21:04, 18 May 2012 (UTC)
I am not really sure about the day-to-day specifics, as I only lurk very occasionally, but I do not hold a high level of respect for any group that not only rejects but condescendingly dismisses testing and research (part of their own creed, no less; just read Harry Potter and the Methods of Rationality to see that if you don't want to browse the blog posts...) in order to preserve a fantasy. Whenever I went there, I got the feeling it was a lot of people lashing out at others (especially professionals) who told them they couldn't just freeze themselves and wake up in a golden fantasy future where everyone agrees with their ideas, there are no more stupid people, and everyone has hover boards. I don't know who made all of them experts, anyway. Rational thinking and logic are great and all, but considering oneself better than ordinary people is not the same as actually getting a specific education in a highly complicated field... So while I think they aren't a bad establishment, they certainly have some problems that (to me at least) make them look like raving egotistical fools from time to time. ±Knightoftldrsig.pngKnightOfTL;DRcritical thinking is the key to success! 22:06, 18 May 2012 (UTC)
The issue seems to stem from the confusion that you can rationalise your way out of any problem - despite a lot of philosophy of science pointing out that this is what the Greeks thought and the Greeks got nowhere with it. Remember Feynman: "If it disagrees with experiment it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is—if it disagrees with experiment it is wrong." This is why I won't let myself get sucked into it too deeply; you don't really need it. Within what Feynman said is all you need to be successful in the world, because that's the basis of skeptical empiricism. But the LessWrong problem is also a generic problem within most futurism. Whether it be cryonics or nanotechnology, if you gather enough people who have their main focus as computer science, mathematics and theoretical physics they'll feel they can say anything. I mean, just look at the mess of the simulation argument article that this BON recently made and all of the supposed "evidence". Yet it isn't, not from a core skeptical empiricist point of view. The whole "thinking about thinking" thing, however, I'm perfectly fine with. But it seems like there's only so far you can discuss that before you find yourself, once again, on your own in a world we're not really evolved enough to understand - and that's where the problems are. Scarlet A.pngtheist 23:12, 18 May 2012 (UTC)
I was always skeptical of the naive application of Bayes' theorem they seem to propose. Setting probability priors in real life is often awfully hard to do. In science, you still have to carry out the experiment; you can't simply rely on probability theory. Declaring Bayesian probability theory superior to science is of course ridiculous. To think about thinking is all fine, but LessWrongians are hardly the only people doing that.--Baloney Detection (talk) 08:33, 19 May 2012 (UTC)
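As a concrete illustration of the prior-setting problem (the numbers below are invented purely for the example, not taken from LW): the same evidence run through Bayes' theorem with two different but defensible-sounding priors gives wildly different posteriors, which is exactly where the real work hides.

```python
# A minimal sketch with made-up numbers: Bayes' theorem itself is trivial,
# but the answer swings enormously with the choice of prior.

def posterior(prior_h, p_data_given_h, p_data_given_not_h):
    """P(H|D) = P(D|H)P(H) / (P(D|H)P(H) + P(D|~H)P(~H))"""
    num = p_data_given_h * prior_h
    return num / (num + p_data_given_not_h * (1 - prior_h))

# Hypothetical "evidence" that is 90% likely under H and 20% likely otherwise.
for prior in (0.5, 0.01):
    print(f"prior {prior:0.2f} -> posterior {posterior(prior, 0.9, 0.2):0.3f}")
# prior 0.50 -> posterior 0.818
# prior 0.01 -> posterior 0.043
```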
"...if you gather enough people who have their main focus as computer science, mathematics and theoretical physics they'll feel they can say anything." Obligatory. Nebuchadnezzar (talk) 09:35, 19 May 2012 (UTC)
It is even worse when someone who thinks he is a physicist (but only learnt to imitate the handwaving, and never knew any of the math) encounters a subject. Dmytry (talk) 10:29, 19 May 2012 (UTC)
LessWrong is mostly okay and has some very smart stuff, and almost all of their problems stem from their narrowcasting demographic. They are a large group of unusually homogeneous people who actively perpetuate their homogeneity. This has led to some minor flaws being exaggerated (cryonics and AI and such are very cool, and LW's hopeful focus on such things can verge on willful blindness) and some useful tools being fetishized (many conversations become thick with jargon as concern over process actually starts to trump legitimate truth-seeking).
I don't believe so. LW is not right-headed. The AI shit is utter nonsense. It is cool in the way cold fusion is cool, but with even less sense than any cold-fusion community. The theories in question are about as right-headed as a bunch of folks doing physics by verbal analogy and equating vague ideas with theories. They're more like Scientology. The AI shit, furthermore, is the central core, as it is all run on donations for 'AI research' - which is basically no different from some 'mmorpg guy' having ideas about an mmorpg and thinking people should join him for the ideas (a very common sight on game programming forums), which sadly in this case, due to the coolness of AI, did result in some people joining. Dmytry (talk) 14:29, 19 May 2012 (UTC)
Some LWers have essentially said that they see RW as useful but sloppy and lowbrow, and they in turn are correct in some respects. We are a very diverse community, which makes us less capable of specialized dialectics when compared to LW. We are irreverent of process in favor of perceived truth, which can be crudely effective or may be just crude. And so on. I am not drawing a squishy let's-all-be-friends false equivalency: I really do believe our two communities are just very different, but both right-headed and working to the same end.--ADtalkModerator 11:39, 19 May 2012 (UTC)
Are you guys anti science woo, or are you guys just anti whatever Great People Told You Is Woo? Because LW is built around obscure science woo similar to the Bogdanov affair paper (technical-sounding gibberish), but about AI. Dmytry (talk) 14:29, 19 May 2012 (UTC)
I wouldn't compare LW with RW, I'd rather compare it with the broader science/skepticism community. I'm not sure those communities work to the same end. LW is rather dismissive of science when it suits them (condemning it as "traditional rationality") as well as having a poor understanding of it (no, Bayes' theorem isn't opposed to science). The latter community, on the other hand, tries to (among other things) improve the public's scientific literacy. Not to mention the techno-woo LW promotes (singularity, cryonics etc). LessWrongians also have their distinct jargon for already existing concepts (as it says in the article, compatibilism is called "requiredism"), which I guess is intended to make their stuff look more novel than it actually is.--Baloney Detection (talk) 12:38, 19 May 2012 (UTC)
Exactly, but don't forget also the 'Singularity Institute' behind it all, a crackpot research organization that confuses verbal mental masturbation with theories and work on AI. This is basically a new Scientology in the making. Dmytry (talk) 14:29, 19 May 2012 (UTC)
It is a crackpot organization indeed, and from what I understand it more or less owes its continued existence to being on Peter Thiel's payroll. Scientology is a good analogy. I myself had thought of them as a sort of techno-Objectivists: belief in the virtues of heroic capitalists is now replaced by belief in the virtues of heroic rationalists, or rather one single heroic rationalist, Eliezer Yudkowsky, who is supposedly going to save the world.--Baloney Detection (talk) 18:43, 19 May 2012 (UTC)

────────────────────────────────────────────────────────────────────────────────────────────────────LW is like the stereotype of the AI programmer taken to a cartoonish extreme. I've worked with these types, though LW is like them multiplied 1,000 times over. The crankery and techno-woo they promote is predictable as a natural outgrowth of theories employed by AI, they've just pushed them far beyond their boundaries. It's hyper-computationalism merged with strong (really fucking strong) multiple realizability and a sort of semi-coherent reductionism (ironic, because multiple realizability was formulated as an anti-reductionist argument in philosophy of mind). Nebuchadnezzar (talk) 15:34, 19 May 2012 (UTC)

That is a good way to put it. The most ironic thing is that EY constantly goes on about how those working on AI now are lunatics, even though he is the absolute caricature of a lunatic 'working' on AI. Most recent example: http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/6m81 . The lack of self-awareness is amazing. The same applies, to a huge extent, to many others at LW (especially the way the biases are only ever in others). And yes, btw, viewing all AIs as maximizers of a real-world so-called 'utility function' (a function of what?) is sheer lunacy. I think the idea is to take the irreducible things that philosophers think we possess, and then assume that both reductionism is true and we possess those things in the most extremely irreducible form. Paperclip maximizer, eh - what is the number of paperclips in the world, and why would I want to answer that question when I can just have automated solution-searching software design better computers? Dmytry (talk) 22:57, 19 May 2012 (UTC)
Could it be said that LW is a demonstration of the Dunning-Kruger effect?--Baloney Detection (talk) 07:39, 20 May 2012 (UTC)
Maybe a group version of this effect. Basically the notion is that, being 'dedicated to the art of human rationality', you are automatically more rational than people who aren't so dedicated. Reverse-causality Dunning-Kruger: the more rational you think you are, the less rational you are. (E.g. science would conduct a double-blind experiment and demand replication; those believing themselves to be rational wouldn't need the double-blind experiment.)
Not really on topic, but all of it seems very silly to me. I interned at a robotics company and had quite a few brushes with functional AI (as in, an AI that takes input from a robot that then figures out how to do a task with the actuators it has been given such as its locomotion and its manipulators; there was a whole guy devoted just to making grabbers and 'hands') and while I am on NDA for the specifics, it is laughable that they treat future AIs as some kind of amorphous hyper-intelligent brain-like think machine. As it is now, people have to program AIs and robots with algorithms to do things as simple as sweep an area in the most efficient way possible and perhaps slow down when they reach a wall so they don't smash headlong into it. They don't 'think' in any way that could ever be compared with LW's fantasy AIs.
They have serious problems taking in and 'understanding' input, if they do at all: enormous amounts of work go into processing any kind of input from the outside world. 'Vision' has to be captured by camera and then meticulously deciphered, and even then only concepts such as 'object' in perhaps a three-dimensional plane can be extracted and mapped. Something as simple as looking at a mug and then comprehending 'mug' is immensely more complex for an AI; the object we can easily recognize as a mug from nearly any angle may need a very clear shot of the handle and the rim for an AI to recognize it. From an obscured view, it may register simply as 'object' or 'cylinder.' Recognizing noises is even harder; while the distance and position of noises may be easy to calculate, the tiny variations in an infinity of banging and scraping sounds that humans know instinctively are almost unfathomable to an AI. Each one is unique, and processing/matching them each time is a difficult task that requires the audio sensor to finish the whole sound and a program to run comparing the sound to hundreds. However useful this would be to, say, a search-and-rescue unit, it's simply not feasible.
Most robots have very limited, specialized capability, complete with sensors designed for their specific tasks, and their AIs are designed around the limitations of their sensors. Even if someone created an AI that had the same potential as a human brain to connect concepts and understand information (as simple as knowing a mug is a mug at a glance from any angle, with a mug of any shape or size), it would have to be able to observe and understand the world around it before it could ever truly even take a first, lurching step in the direction of LW's omniscient hyperintelligent synthetic intelligence. It can't just process, say, every word written on the internet. Only with experience and context can humans understand those words fully, and an AI would not have the same capability as us to gather that context and experience.
Almost all AIs right now take their input from humans or are designed to gather input and information from a specific environment a human has designed them for. For one to be truly self-aware, it would need to be able to gather input of its own choosing. And the tools we have now (camera and processing technology, sensor and manipulator technology) have fundamental problems preventing them from ever reaching equivalency with even just human anatomy. To be fair, though, AI can have senses that humans cannot have. For example, a robot (not even an AI, just a robot) can know exactly, down to the centimeter, how far it's traveled, and gather data on all sorts of things invisible to the human eye, such as exact temperature, waves outside of the visible spectrum, and the vibrations in the environment.
But creating the amorphous, unspecialized, magical thinkbot LW envisions is not practical for humans to do, nor, if we tried, would it 'think' in the way that LW envisions. I don't claim to be an expert on AI or the recent advancements or where it will go in the future, but even just from reading the technical documents from this stuff I just can't see how LW thinks that we're rapidly approaching a singularity. And if there is such a singularity, they certainly aren't helping us get there with their so-called massive intellects by sitting around and not helping the engineers, programmers, and other people involved to develop their treasured super-technology. But I suspect that if they did, they would give up their goofiness and just appreciate the amazing and incredible progress that we are making now, rather than mythical, magical progress that supposedly we will someday make. ±Knightoftldrsig.pngKnightOfTL;DRwalls of text while-u-wait 13:31, 21 May 2012 (UTC)
Not to mention that, as it is right now, AI is a supplemental technology for functional robots. Many robots with some amount of AI that are built for dynamic (rather than repetitive) tasks have a human on the other end to guide them. It's not the AI's job to make decisions or to drive the robot during fine tasks, but to calculate paths, handle manipulation, maneuver through dangerous conditions, and evaluate input data. People create AI and robots not to watch them sit there and think for them, but to take care of tasks that are too dangerous or too difficult for humans to do on their own. They are useful to us. A thinkbot that can somehow answer all the questions and create a new fabulous utopia (or destroy the world, take your pick) needs a function that it would be developed for. And 'ruling the world' isn't a function we would want an AI to take. An AI isn't going to somehow become more sophisticated such that one designed for data storage and processing (for example) is going to somehow mutate into a hyperintelligent digital omniscience. People design them for specific tasks, and I somehow doubt that something that complex would arise from the ineptitude of someone failing to make that task-based software. You're lightyears and lightyears more likely to get software that just doesn't work than an accidental, evolving AI... ±Knightoftldrsig.pngKnightOfTL;DRmore at 11 13:41, 21 May 2012 (UTC)
Thanks for the contribution to the discussion! I was feeling very one-sided on this; I did some work related to image recognition (though I mostly work on the opposite, i.e. image rendering) and music processing, and even on 'agents', as in game AIs, and was feeling basically the same way about this issue. Also, as for the philosophical rambling on how to make an 'intelligence' think along the lines of not killing mankind... a much, much harder problem is making software not explore useless paths in solution trees, because there's not enough computing time in the entire universe to process all of them. Dmytry (talk) 18:15, 21 May 2012 (UTC)

Roko/"Forbidden Post"[edit]

Having just read this article for the first time (and never read anything at LessWrong), the bit about Roko stands out weirdly. Why is the reader being told all this? It reads as a blow-by-blow account of something that made somebody LANCB, while not saying when this occurred or what its lasting significance was, if any. Surely it could be summarised in a couple of sentences, if it's even worth mentioning at all. ΨΣΔξΣΓΩΙÐWeaselly.jpgMethinks it is a Weasel 10:25, 19 May 2012 (UTC)

My understanding is that the issue is that they are substantially insane, and it is probably in the reader's interest to know that they are substantially insane, but any short summary of that would look non-neutral and it would feel potentially biased to post such a summary. So, instead, one instance where they reinvented hell is described in undue detail. Short story: a crank superior 'theory' of decision making is one of the local tropes. A future superintelligent AI follows the crank superior theory because it is superior, and decides to torture people who sinned against the AI, in the form of not putting all their effort towards its coming. More technically: the crank decision "theory" was made to one-box on Newcomb's Paradox, and the torture corresponds to choosing one box even when both boxes are transparent and you can see that the one box you are taking is, in fact, empty (there is nothing to gain from the torture). It does make sense, in a perverted way: if you are buying into this, you will help to build the AI that would have tortured you if it could - for not having helped it back when it didn't exist. Dmytry (talk) 11:02, 19 May 2012 (UTC)
I believe the lengthy exposition of the "basilisk" comes from the fact that even mentioning it is verboten over there. It could be condensed somewhat, I agree.--ADtalkModerator 11:25, 19 May 2012 (UTC)
Roko's basilisk is certainly worth mentioning. Apart from being plain funny, it highlights some of the whackier beliefs of LW. And highlighting woo is what this Wiki is (partially) about.--Baloney Detection (talk) 12:42, 19 May 2012 (UTC)

Richard Loosemore

Who is he? ( http://lesswrong.com/user/Richard_Loosemore/ http://www.richardloosemore.com/bio ) What I can find is this: http://www.sl4.org/archive/0608/15895.html - banned for some disagreement about the "conjunction fallacy". From even skimming the Wikipedia article on the conjunction fallacy http://en.wikipedia.org/wiki/Conjunction_fallacy you can see that it is a misnomer: people do answer it correctly when it is presented in 'how many people out of 100' form instead of 'how probable is it', and likewise people don't expect 6 coin tosses to give FHHFHF more often than 3 coin tosses give FHH, so it is clear this fallacy is not robustly a feature of conjunction itself. It can be more likely that "a bank teller active in the feminist movement" is Linda than that "a bank teller" is Linda, and people often use 'is' the wrong way in language. People also use the questions themselves as evidence: when you are asked the probability of a very detailed scenario then, assuming non-manipulative circumstances, the scenario presumably came from some detailed analysis. (Ironically, the best counterexample, and demonstration that there's something wrong with the way people do conjunction, would be the AI shit at LW, which is an enormous conjunction of which they are nonetheless pretty sure.) Dmytry (talk) 12:28, 24 May 2012 (UTC)
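A toy illustration of the two directions of judgement at work here, with counts that are entirely made up: the conjunction "bank teller and feminist" is never more probable than "bank teller" alone, yet Linda's description can genuinely fit the more specific group better.

```python
# Made-up counts, purely illustrative: the conjunction "teller AND feminist"
# is never more probable than "teller" alone, but the reverse conditional -
# how well the description fits the group - can go the other way.
population = 10_000
tellers = 100                  # bank tellers in the population
feminist_tellers = 10          # bank tellers active in the feminist movement

p_teller = tellers / population
p_feminist_teller = feminist_tellers / population
print(p_feminist_teller <= p_teller)                  # True, always

fits_given_teller = 5 / tellers                       # 5 of 100 tellers match the description
fits_given_feminist_teller = 4 / feminist_tellers     # 4 of 10 match it
print(fits_given_teller, fits_given_feminist_teller)  # 0.05 vs 0.4
```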

Never heard of him, though he seems to be pretty into the standard pablum of LW woo (singularitarianism). I guess his crime was not conforming enough to the LessWrong phyg/cult standards.--Baloney Detection (talk) 22:58, 25 May 2012 (UTC)