LessWrong
LessWrong is, in its own words, a community blog "devoted to refining the art of human rationality." To the uninitiated, it is basically a blog where scientists have rather dry chats about Bayesianism, artificial intelligence, cognitive biases, and what "rationalists" should and shouldn't do. Think of it as the teetotalling and disapproving older brother of RationalWiki, waiting for us to get off the drugs and sex and follow him into an Ivy League college. Like all good things, it makes for an entertaining read, in moderation.
LessWrong is dominated by Eliezer Yudkowsky, a research fellow for the Singularity Institute for Artificial Intelligence. Yudkowsky generally blogs about cognitive science, something called "fun theory,"[1] and AI, which is his field of study.
On the whole, LessWrong is worth checking out, but it has some rather annoying features that we will now proceed to give undue weight to.
The good
Less Wrong is a highly active and sizeable forum. Thanks to its karma system, whereby contributors gain points for making smart and insightful comments, and its ability to promote discussions to its front page, its content is usually of very high quality. The karma system almost entirely eradicates trolling, at least in the sense that most people would recognise it. Almost every article is thought-provoking and often links to others, resulting in an unwitting tab explosion - each one packed with increasingly fascinating insights into why you, yes you, are an idiot.
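For the curious, here is a minimal sketch of how a karma-and-promotion scheme of this general shape might work. Everything in it - the class names, the threshold, the sorting - is our own illustration, not LessWrong's actual code:

```python
# Illustrative sketch only: these names and thresholds are invented,
# not taken from LessWrong's real codebase.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Post:
    title: str
    author: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def karma(self) -> int:
        # Net community approval for this post.
        return self.upvotes - self.downvotes


@dataclass
class Forum:
    promotion_threshold: int = 10  # hypothetical cutoff for promotion
    posts: List[Post] = field(default_factory=list)

    def submit(self, post: Post) -> None:
        self.posts.append(post)

    def front_page(self) -> List[Post]:
        # Only posts with enough net karma are promoted, best first;
        # trollish posts never accumulate enough karma to surface.
        promoted = [p for p in self.posts if p.karma >= self.promotion_threshold]
        return sorted(promoted, key=lambda p: p.karma, reverse=True)


if __name__ == "__main__":
    forum = Forum()
    post = Post("An unusually insightful post", "new_user")
    forum.submit(post)
    post.upvotes = 42
    print([p.title for p in forum.front_page()])
```

The anti-trolling effect lives in that last method: a post the community doesn't vote up simply never reaches the front page, so there is no audience to troll.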
Eliezer Yudkowsky's writings on the subject of cognitive biases and rational thinking are among the best on the internet. They are expanded on at his own site and in the archives of Overcoming Bias, a site he contributed to before setting up Less Wrong. The Less Wrong wiki and sequences are intensive and lengthy explorations of human rationality; while they may be far too esoteric and complex for fledgling rationalists, they can be considered near-essential reading for anyone wanting to take the next step to "hardcore" rationalism.
The bad
Despite the clarity of writing in some of the posts, it's often difficult to understand what the hell is going on in many of the discussions unless you're already a frequent visitor, so it is a difficult community to get fully "in" with. Some standalone works by the top contributors are a little easier to comprehend, but diving straight into a real discussion on your first visit is more or less impossible. The community isn't always kind to newcomers: new users are expected to understand Bayesian inference (preferably the Less Wrong way), have a basic understanding of almost every scientific field, and have a firm grasp of mathematics.
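For the uninitiated, the Bayesian inference in question boils down to repeated application of Bayes' theorem: start with a prior probability for a hypothesis H, observe some evidence E, and update. A worked toy example follows; the numbers are ours, not the site's.

```latex
% Bayes' theorem: the posterior probability of H given evidence E.
\[
  P(H \mid E) = \frac{P(E \mid H)\,P(H)}
                     {P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]
% Invented numbers: prior P(H) = 0.01, P(E|H) = 0.9, P(E|not-H) = 0.05:
%   P(H|E) = (0.9 x 0.01) / (0.9 x 0.01 + 0.05 x 0.99) ~= 0.154
% A "positive" result still leaves H unlikely, because the prior was low.
```

The "doozy of a prior probability" complained about further down is the P(H) term: pick it tendentiously and the arithmetic will obligingly confirm whatever you already believed, with no working shown.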
Members of the Less Wrong community are expected to be on board with the singularitarian/transhumanist/cryonics[2] bundle. As many of them are actively involved with singularity and artificial intelligence research, this bias may be understandable; but anyone without a favourable view of these concepts is likely to be shunned as a closed-minded moron.
The site has also been criticised for practically being a personality cult of Eliezer Yudkowsky. This is almost certainly not intentional on his part - just ask Brian of Nazareth.
The ugly
Less Wrong does have its ugly side. Its near-claim of ownership of the word "rationalism" has been a contentious issue: one cannot call oneself rational there without being fully on board with Bayesian thinking, amongst other things. It's unclear whether Descartes, Spinoza or Leibniz would have lasted a day without being voted down into oblivion. Indeed, anyone who even hints at claiming to be a "rationalist" but doesn't write exactly what is expected is likely to be treated with contempt - as criticism of RationalWiki and the Richard Dawkins forum seems to attest.
Within the more esoteric discussions of artificial intelligence, there is an idea so dangerous, so utterly Cthulhuian in nature, that it must be censored from Less Wrong for everyone's safety.[3] Long-term contributor Roko left Less Wrong altogether after posting it, deleting all his contributions.[4] According to Yudkowsky and his supporters, simply knowing this idea makes it more likely to become true in the real world, leading to an artificial intelligence harming humanity; as such, they keep us all safe by deleting any post containing this one dangerous idea. This is the actual reasoning behind the censorship, suggesting that Yudkowsky may be tap-dancing on the fine line between visionary and madman.[5][6] Some LessWrong posters actually had nightmares about it - as befits members of a "rationalist" movement that literally believes in a Hell for those who get life wrong.[7] Occasionally a poster will complain about the idea being deleted,[8] rather than, e.g., writing it up as science fiction.[9]
In the outside world, the ugly manifests itself as LessWrong acolytes, minds freshly blown, metastasising to other sites, bringing the Good News for Modern Rationalists, without clearing their local jargon cache.
How to spot a LessWrongian in your sceptical discussion
Sceptical discussion spaces will often have people show up labelling themselves "rationalist" and being as irritating as teenage nerds who've just discovered Ayn Rand. Take one drink for each of:
- "As rationalists, we should ..."
- "As a rationalist, you should ..."
- "Bayesian" (particularly when they seem to have picked a real doozy of a prior probability; no working will ever be shown)
- Preaching "rationalism" as if it's a religion substitute.[10]
- Attempting to argue their points with them results in being told to ~~take a communications course~~ study several LessWrong "sequences"[11] and read hundreds of LessWrong articles rather than addressing whatever the point was.[12]
- "Eliezer Yudkowsky suggests ..."
So why are they like this?
A common way for very smart people to be stupid is to think they can think their way out of being apes with pretensions. There is, however, no hack that transcends being human, and playing a "let's pretend" game otherwise doesn't mean you win all arguments, or any. Even with the best intentions and the fullest knowledge of biases and rational thinking, you won't transcend the pitfalls of having a brain designed, in the words of The Science of Discworld, to shout at monkeys in the next tree. This doesn't mean you shouldn't give it a damn good try - and LessWrong gives it a better shot than most - but remember that you, yes you, are an idiot.
You'll be unsurprised to learn that the LessWrong community self-diagnoses as being on the Asperger's/autism spectrum.[13] They do all this because they are bad at human interaction.[14]
See also
External links
- LessWrong
- RationalWiki is only "rational" in quotation marks, unfortunately
Footnotes
- ↑ http://lesswrong.com/lw/y0/31_laws_of_fun/
- ↑ "I'm going to come out and say it: If you don't sign up your kids for cryonics then you are a lousy parent." (Eliezer Yudkowsky, Normal Cryonics, 19 Jan 2010)
- ↑ It's not, however, because it was on the web long enough for Google to stash a copy here (http://pastebin.com/R6pkvHBF).
- ↑ http://lesswrong.com/lw/2ft/open_thread_july_2010_part_2/2c46?c=1
- ↑ http://lesswrong.com/lw/2i4/contrived_infinitetorture_scenarios_july_2010/2bh5?c=1#2bsq
- ↑ http://lesswrong.com/lw/2ft/open_thread_july_2010_part_2/2c6y?c=1
- ↑ http://lesswrong.com/lw/2ft/open_thread_july_2010_part_2/2dg1?c=1
- ↑ http://lesswrong.com/lw/2i4/contrived_infinitetorture_scenarios_july_2010/2bh5?c=1
- ↑ http://lesswrong.com/lw/2ft/open_thread_july_2010_part_2/2dmh?c=1
- ↑ e.g. Eliezer Yudkowsky on fanfiction.net. At least he's using Harry Potter fanfic rather than writing a new Atlas Shrugged. And, unlike Rand, he can write.
- ↑ http://wiki.lesswrong.com/wiki/Sequences
- ↑ Less is More - or: the sorry state of AI friendliness discourse (Stefan Pernar, Rational Morality blog, 2009-11-06)
- ↑ Aspergers Poll Results: LW is nerdier than the Math Olympiad? Of course, this poll was linked at the end of a long article on Asperger's Syndrome, so the self-selection bias was overwhelming.
- ↑ This discussion is particularly interesting in terms of the social anthropology of LessWrongians. Note the downvoting of those pointing out that mating behaviour is rather important to what makes humans human.