LessWrong
LessWrong is dominated by the ideas of Eliezer Yudkowsky, a research fellow for the Singularity Institute for Artificial Intelligence. Yudkowsky has generally blogged about cognitive science, "fun theory" (what we will do when we are all immortal[1]) and AI, which is his field of study. He had been adapting his blog posts into a book on human rationality[2] until the book was put on hold[3] in favor of more important projects.[4] He either failed to deliver one of those projects[5] or decided to abandon it, as with previous projects.[6]
On the whole, LessWrong is worth checking out, but it has some rather annoying features that we will now proceed to give undue weight to.
History
Yudkowsky was blocked on his writing, so he started a series of posts on the blog Overcoming Bias[7] with his friend Robin Hanson, an economist at GMU and a pioneer of the prediction market. This attracted a fan base, so LessWrong was put together as a more collaborative blog where anyone could post on topics of local interest (rationality, philosophy, AI, transhumanism).
LessWrong is run by the Singularity Institute for Artificial Intelligence (SIAI), the nonprofit Yudkowsky founded and works for. Yudkowsky considers LessWrong useful insofar as it advances SIAI's work,[8] and the site is a key venue for SIAI recruitment[9] and fundraising.[10] Discussion posts will often be on SIAI-related topics, rather than rationality per se.
LessWrong's community meme base comes from transhumanism, inherited through the presence of Yudkowsky and other regulars on Overcoming Bias, the SL4 mailing list[11] and back to the Extropians mailing lists[12] of the 1990s. It's functionally a transhumanist site with a particular interest, rather than a site about the particular interest.
The good
Less Wrong is a highly active and sizable forum. Thanks to its karma system, whereby contributors gain points for making smart and insightful comments, and its practice of promoting good discussions to the front page, its content is usually of very high quality. Almost every article is thought-provoking and often links to others, resulting in an unwitting tab explosion - each one packed with increasingly fascinating insights into why you, yes you, are an idiot.[13]
The karma system almost entirely eradicates trolling, at least in the sense that most people would recognise it, though the system has its detractors, who argue that it serves to reinforce existing beliefs and tropes more than anything else.[14] The moderation system succeeds in keeping the comments high-quality; there are standard beliefs and some groupthink, but you can still get upvoted for quite cutting criticisms[15] if you show in your comment that you've done your homework and understand what you're objecting to (or can at least throw around the local jargon fluently).
Eliezer Yudkowsky's writings on the subject of cognitive biases and rational thinking are among the best on the internet. The Less Wrong wiki and sequences are intensive and lengthy explorations of human rationality, though anyone who is moderately scientifically literate is unlikely to find much that is terribly new.[16] Yudkowsky writes about the correspondence theory of truth[17] as well as the importance of falsifiability,[18] both of which are basic to science. Nevertheless, it has been suggested[19] that it would take a vast amount of reading and personal research to find the information contained in the sequences anywhere else.
Less Wrong philosophy in a nutshell
Although most posters don't consider Less Wrong to be "mainstream" philosophy, it has been compared to Wittgenstein, whose ideas about how language limits the ability of rationalists to communicate seem to best represent Yudkowsky and co.'s views, and to Quine,[20] whose approach to naturalism and science reflects the empiricism and reductionism of LW. Gary Drescher[wp]'s excellent-but-dense Good and Real[21] covers a lot of the same ground as the Sequences and came out around the time the Sequences started; Yudkowsky had not read it before finishing them, but approves of the book.
A key part of the Less Wrong approach to human rationality is to avoid "fallacies of compression" and mistaking the map for the territory, both of which result from humans trying to fit a vast universe into a relatively small and squishy piece of meat located between their ears. According to Yudkowsky, beliefs should constrain our expectations, and beliefs that remain true no matter what we see are what constitute blind faith - for example, two people arguing over "whether a tree falling in a forest, with no one around to hear it, makes a sound" might answer yes and no based on different definitions of "sound", but wouldn't actually expect anything different to happen. Precision, therefore, is the order of the day, and is achieved through expectations and sensory anticipation rather than by merely saying something is true and arguing through clever wordplay.
Techniques such as rationalist taboo are used to help this along, by removing preconceived inferences about certain terms and forcing people to describe what they expect to see, and to describe it thoroughly, before proceeding. In contrast to the usual philosophical approach to "unpacking" - which aims to describe and define terms in advance - the "taboo" game pushes those playing it towards less abstract descriptions rather than more abstract ones: for instance, describing "red" in terms of examples of red objects, rather than describing it as a "colour" and then as a "property" and then attempting to explain what a "property" is in still more abstract terms.
Beyond this, mathematical precision in the approach to rationality is also endorsed, particularly Bayesian approaches to updating beliefs about hypotheses based on evidence; sometimes Aumann's agreement theorem is invoked to argue that people who share the right evidence and the "right" priors should end up agreeing with each other. While most LW posters tend to agree on this methodology, they sometimes clash when it comes to using it to reach useful conclusions about the futurism discussed on the site, particularly if any aspect of what's being discussed could even tangentially be described as "politics."
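The update itself is nothing more exotic than Bayes' theorem: weight your prior by how well each hypothesis predicts the evidence, then renormalise. A minimal sketch in Python, with the coin-flip hypothesis and every number invented purely for illustration (nothing here is taken from LessWrong):

 # Illustrative Bayesian update with made-up numbers.
 # Hypothesis H: "this coin is biased towards heads (70% heads)".
 prior = 0.5                  # P(H) before seeing any flips
 p_heads_if_biased = 0.7      # P(heads | H)
 p_heads_if_fair = 0.5        # P(heads | not H)

 # One flip comes up heads; update belief in H.
 p_heads = prior * p_heads_if_biased + (1 - prior) * p_heads_if_fair  # P(heads)
 posterior = prior * p_heads_if_biased / p_heads                      # P(H | heads)
 print(round(posterior, 3))   # 0.583 - a single heads nudges belief in the bias upward

Aumann's theorem then says that two such updaters with common priors, whose posteriors are common knowledge to each other, cannot "agree to disagree"; in practice the arguments are over whose priors were the "right" ones to start with.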
The bad
Despite the clarity of writing in some of the posts, it's often difficult to understand what the hell is going on unless you're a frequent visitor who knows the local patois,[22] and diving straight into a real discussion on your first visit is more or less impossible. They aren't always kind to newcomers. Users are expected to buy into Bayesian inference and epistemology (preferably the Less Wrong way) and have a basic understanding of almost every scientific field and a firm grasp of mathematics.[23]
It is a community norm to be on board with the singularitarian/transhumanist/cryonics[24] bundle, though the readership doesn't necessarily sign on to it.[25] As most of the highest-voted participants are actively involved with singularity and artificial intelligence research, this is perhaps understandable. Anyone without a favourable view of these concepts is likely to be shunned as a closed-minded moron.
If you indicate your disagreement with the local belief clusters without at least using their jargon, it used to be common for someone to helpfully suggest that "you should try reading the sequences" before attempting to talk to them. The "sequences"[26] are several collated series of Yudkowsky's blog posts, written as first draft raw material for a planned book as part of a two-year blog-a-day marathon. There are eighteen sequences in all. The indexes for just the four "core sequences"[27] are somewhere north of 10,000 words. Those link to over a hundred and fifty 2,000-3,000-word blog posts. That's about 300,000-450,000 words for those four, and around a million words for the lot.[28] With a few million more words of often-relevant comments. For comparison, the Lord Of The Rings trilogy is 473,000 words.[29] As such, "You should try reading the sequences" is LessWrong for "fuck you." This seems to have stopped since it was called to their attention.
The site has also been criticized for practically being a personality cult of Eliezer Yudkowsky. This is almost certainly not intentional on his part; just ask Brian of Nazareth.[30] Referring to themselves as a "phyg"[31] (rot13 of "cult") instead of a cult is, however, approved practice at LessWrong, so that Google doesn't come to associate the word "cult" with the site.[32]
Ironically, Less Wrong users rarely recognize the biases that arise from the site's demographics,[33] which amount to the same problem academic psychology has with WEIRD (Western, educated, industrialized, rich, democratic) samples: mostly male, white, white-collar, 20-30-year-old United States residents from families with a Christian or Jewish background. When the sources and instances of this collective bias are pointed out, they typically ignore them or say that "this is just how things are here."[34]
Yudkowsky has the annoying habit of inventing new terms for already-existing concepts: e.g., the compatibilist position on free will/determinism is rebranded "requiredism"[35] (although, as "requiredism" illustrates, these coinages are often ignored by LWers).
The ugly
Less Wrong does have its ugly side. Its near-claim of ownership of the word "rationality"[36] has been a contentious issue, as one cannot claim to be rational without being fully on board with Bayesian thinking in a way that adds up to local LessWrong norms, amongst other things. It's unclear whether Descartes, Spinoza or Leibniz would have lasted a day without being voted down into oblivion. Indeed, anyone who even hints at claiming to be a "rationalist" but doesn't write exactly what is expected is likely to be treated with contempt, as the criticism of RationalWiki and the Richard Dawkins forum seems to attest. (Any group that resorts to saying its critics are doing skepticism wrong[37] is already fucked and is never going anywhere.)
"Politics is the mindkiller" is the mindkiller. The meme came from a Yudkowsky post[38] about how politics-related discussion reliably goes off the rails. This has developed into a community aversion to anything even tangentially political — even the word "politics" itself is avoided and euphemised as "mindkilling" as a discussion-stopper. This gets wacky when the discussion is of an actual existential risk to humanity, global warming, or indeed almost anything else actually practical, despite the site's claimed serious interest in existential risk, as these genius autodidacts show just how to use their newfound rationality skills for rationalisation. Libertarian politics are of course the neutral baseline,[39] it's other politics that are mindkilling.
Roko's Basilisk
The original “basilisk” involved imagining a post-singularity AI in the future of our world which will send you to transhuman hell after the singularity, if you don’t do everything you could in the past (i.e. our present) to make it a friendly singularity. Rather than openly and rationally discuss whether this is a sensible “threat” at all, or just an illusion, the whole topic was hurriedly hidden away. And thus a legend was born.
—Mitchell Porter on LW (the comment having since been removed by the mods)
Yudkowsky is a firm believer in strong forum moderation[40] and has no qualms calling it "censorship," even when applied to cranks and idiots.[41] This does not sit well with some.
One source of considerable HCM (headless chicken mode) was the "Forbidden Post", or "basilisk". Yudkowsky is interested in what could be described as causality that goes backwards in time: future events "causing" past events[42] by the mechanism of having something in the present simulate what someone will do in the future and using the results in the present, e.g. not giving a gun to someone you predict will shoot you. This gets odd when you imagine superintelligences, because their predictions of human behaviour may be near-perfect, as in Newcomb's paradox[wp].

Roko (a top contributor at the time) wondered if a future Friendly AI would punish people who didn't do everything in their power to further the AI research from which this AI originated, by at the very least donating all they have to it. He reasoned that every day without AI, bad things happen (150,000+ people die every day, war is fought, millions go hungry) and a future Friendly AI would want to prevent this, so it might punish those who understood the importance of donating but didn't donate all they could. He then wondered if future AIs would be more likely to punish those who had wondered if future AIs would punish them. That final thought proved too much for some LessWrong readers, who then had nightmares about being tortured for not donating enough to SIAI.[43] Yudkowsky replied to Roko's post, calling him names and claiming that posting such things on an Internet forum could cause incalculable harm to the people who read it. Four hours later, he deleted Roko's post,[44] including all comments. Roko left LessWrong, deleting his thousands of posts and comments.[45] (He later briefly returned[46] and posted, among other things, that "I agree that the post in question should not appear in public"[47] and "I wish I had never learned about any of these ideas".[48])
One frustrated poster then protested this censorship with a threat to increase existential risk, i.e. to do things to make some end-of-the-world catastrophe ever so slightly more likely, in their case by sending some emails to right-wing bloggers which they thought might make some harmful regulation more likely to pass.[49] The poster said they'd do this every time they saw a post get censored.[50][51] LessWrong took this threat seriously, though Yudkowsky didn't yield.[52]
The matter is now the occasional subject of contorted LW posts, as people try to discuss the issue without talking about what they're talking about,[53][54] and is a reliable space-filler for journalists covering LW-related stories.[55] The moderators occasionally sweep through LessWrong removing basilisk discussion.[56][57]
How to spot a LessWrongian in your sceptical discussion
In the outside world, the ugly manifests itself as LessWrong acolytes, minds freshly blown, metastasising to other sites, bringing the Good News for Modern Rationalists, without clearing their local jargon cache. Sceptical discussion spaces will often have people show up labelling themselves "rationalist" and being as irritating as teenage nerds who've just discovered Ayn Rand. Take one drink for each of:
- "As rationalists, we should ..."
- "As a rationalist, you should ..."
- "Bayesian" (particularly when they seem to have picked a real doozy of a prior probability; no working will ever be shown)
- Preaching "rationalism" as if it's a religion substitute.[58][59][60]
- Taking offense when someone inevitably points out that they're preaching a religion substitute.
- Telling you "You should try reading the sequences" rather than addressing your point when you attempt to argue with them.[61]
- "Eliezer Yudkowsky suggests ..."
- Believing themselves to be experts in areas in which they are in fact clueless. Examples include the Muehlhauser-Wang dialogue,[62] quantum mechanics,[63] and risks from AI.[64]
So why are they like this?
A common way for very smart people to be stupid is to think they can think their way out of being apes with pretensions. However, there is no hack that transcends being human. Playing a "let's pretend" game otherwise doesn't mean you win all arguments, or any. Even with the best intentions and knowledge about biases and rational thinking, you won't transcend and avoid the pitfalls of having a brain designed, in the words of The Science of Discworld, to shout at monkeys in the next tree. This doesn't mean you shouldn't give it a damn good try, and LessWrong gives it a better shot than most, but remember that you, yes you, are an idiot.[13]
You'll be unsurprised to know that many in the LessWrong community self-diagnose as being on the Asperger's/autism spectrum.[65] They do all this because they are bad at human interaction.[66][67] It is not surprising to learn that there is an interest in seduction at LessWrong.[68]
LessWrong is recognized as a substitute for religion for atheists in certain corners of the Internet.[69]
Local beliefs and tropes
Despite the mission statement, there are a few things the site assumes its readers already understand. These can confuse and trip up unaware newcomers.
- Yudkowsky has declared that the many worlds interpretation of quantum mechanics is correct,[70] despite the lack of testable predictions differing from any other interpretation (and despite admittedly not being a physicist, and ignoring the qualms of physicists[71][72] and chemists[73] concerning his presentation of idiosyncratic personal interpretations as normative), and then uses this as support for his philosophical[74] and moral[75] conclusions, including that his version of Bayesian epistemology is superior to his version[76] of science.[77] Exactly where he got the idea that there is, or could be, a conflict between the scientific method and Bayes' theorem remains a mystery, as hardly a single scientist or philosopher of science recognizes any such conflict (find one who does). Many philosophers of science think that Bayesianism is a very successful account of the practice of science.[78]
- The 1,090 respondents to the 2011 LessWrong survey gave the Many Worlds Interpretation a ~50% chance of being substantively correct.[79] The number of these who could understand, let alone solve, the Schrödinger equation for the hydrogen atom was not given.
- Logic problems like Newcomb's paradox[wp], involving hypothetical perfect philosophical constructs,[80] are taken seriously as things that must be soluble in a useful personal philosophy.[67] (The justification is that Newcomb-like decision problems might actually happen for an AI interacting with someone who had a copy of its source code and data. Halting problem[wp] notwithstanding.)
- Timeless Decision Theory (TDT) is a homebrewed decision theory designed to solve such logic problems.[81] This theory has not been published outside LessWrong (or in full on LessWrong), and outside LessWrong it's only discussed in out-of-the-way technical mailing lists. There are several other such local theories.[82] As noted above, they can result in some odd behaviours.
- Timeless physics in general is approved of by Yudkowsky and hence popular locally.
- Yudkowsky got annoyed at other AI researchers using the word "emergent" as a handwaving explanation,[83] possibly because he has admitted being a born-and-bred reductionist. More likely, Yudkowsky doesn't actually understand emergence. He holds, and regulars enforce, that it must be expunged from the vocabulary of all rational persons.
- Yudkowsky is the best the AI field has to offer, and everyone else is stupid.[84]
- The jargon of AI, transhumanism and philosophy is part of the local vocabulary.[85]
- Amanda Knox[86] is innocent.[87][88] (This is not inherently strange and is worked out reasonably, but was accepted as a usably proven fact well before the appeal succeeded.)
- Published philosophers and scientists admired by the LessWrong cluster include: E.T. Jaynes; I.J. Good; Judea Pearl; Gary Drescher; Kahneman and Tversky; Tooby and Cosmides.
- Consequentialism is the correct moral theory.[89]
- Rational people use the LessWrong jargon.[90]
- Stephen Jay Gould was a fraud and you must do your best to un-believe everything he ever wrote.[91] Particularly The Mismeasure Of Man,[92] since the IQ differences between races are obvious.[93] The scientific racism/racial realism/human biodiversity crowd are thus tolerated more than one might have otherwise expected, VDARE reference links and all.
- Yudkowsky is a genius who has solved many big philosophical questions.[94][95]
See also
External links
- LessWrong
- The Problem With Rational Wiki
- Greg Egan disses SIAI
- PoE News on Lesswrong. In particular, "Pretend there is a website of trans-accountants who have never had an accounting job nor had any education in accounting."
Footnotes
- ↑ http://lesswrong.com/lw/y0/31_laws_of_fun/
- ↑ http://lesswrong.com/r/discussion/lw/2u3/request_interesting_invertible_facts/
- ↑ http://lesswrong.com/lw/d06/intellectual_insularity_and_productivity/6swt
- ↑ http://lesswrong.com/lw/dm9/revisiting_sis_2011_strategic_plan_how_are_we/
- ↑ http://predictionbook.com/predictions/8093
- ↑ http://lesswrong.com/lw/dm9/revisiting_sis_2011_strategic_plan_how_are_we/
- ↑ http://www.overcomingbias.com/
- ↑ "mportant and interesting in proportion to how much it helps construct a Friendly AI"
- ↑ http://web.archive.org/web/20110621192259/http://singinst.org/achievements
- ↑ http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/
- ↑ http://sl4.org/
- ↑ http://www.extropy.org/emaillists.htm
- ↑ 13.0 13.1 Less Wrong - Running on Corrupted Hardware
- ↑ http://kruel.co/2012/07/30/how-the-lesswrong-reputation-system-sucks/
- ↑ e.g. the top scoring post.
- ↑ http://lesswrong.com/lw/eik/eliezers_sequences_and_mainstream_academia/
- ↑ http://yudkowsky.net/rational/the-simple-truth
- ↑ http://lesswrong.com/lw/jl/what_is_evidence/
- ↑ http://lesswrong.com/lw/eik/eliezers_sequences_and_mainstream_academia/7gfx
- ↑ Less Wrong Rationality and Mainstream Philosophy
- ↑ http://mitpress.mit.edu/catalog/item/default.asp?tid=10902&ttype=2
- ↑ Jargon (Lesswrongwiki)
- ↑ http://lesswrong.com/lw/2un/references_resources_for_lesswrong/
- ↑ "I'm going to come out and say it: If you don't sign up your kids for cryonics then you are a lousy parent." (Eliezer Yudkowsky, Normal Cryonics, 19 Jan 2010)
- ↑ http://lesswrong.com/lw/89a/2011_less_wrong_census_survey/ - the average estimate of success is <20%, a third of the readership rejects it (possibly on standard LW ethical grounds), and only half are "considering it".
- ↑ Less Wrong - Sequences
- ↑ Core Sequences (LessWrong)
- ↑ http://lesswrong.com/lw/2kk/book_recommendations/37n7
- ↑ Wordcount of popular (and hefty) epics (Aballeno, The Cesspit, 2009-03-06)
- ↑ e.g. [1], [2]
- ↑ Not this one, sadly.
- ↑ Our Phyg Is Not Exclusive Enough
- ↑ Survey results
- ↑ LW systemic bias: US centrism
- ↑ http://lesswrong.com/lw/r0/thou_art_physics/
- ↑ Not rationalism, apparently.
- ↑ Undiscriminating Skepticism (EY, 2010-03-14)
- ↑ http://lesswrong.com/lw/gw/politics_is_the_mindkiller
- ↑ Yudkowsky notes a "large overlap in the communities" between LessWrong and seasteading.
- ↑ http://lesswrong.com/lw/c1/wellkept_gardens_die_by_pacifism/
- ↑ http://lesswrong.com/lw/io/is_molecular_nanotechnology_scientific/eox
- ↑ http://wiki.lesswrong.com/wiki/Timeless_decision_theory
- ↑ See the "self-diagnosis" bit in #So why are they like this? below... er, above.
- ↑ Though a google cache of it survived - please do not read if you are prone to nightmares or OCD and/or are worried about causing incalculable harm to yourself.
- ↑ http://lesswrong.com/user/Roko
- ↑ http://lesswrong.com/user/formallyknownasroko
- ↑ http://lesswrong.com/lw/38u/best_career_models_for_doing_research/33w7
- ↑ http://lesswrong.com/lw/38u/best_career_models_for_doing_research/344l
- ↑ http://lesswrong.com/lw/38u/best_career_models_for_doing_research/33vl
- ↑ http://lesswrong.com/lw/39l/how_to_loose_100_karma_in_6_hours_what_just/
- ↑ http://lesswrong.com/lw/2ft/open_thread_july_2010_part_2/2o25?c=1
- ↑ http://lesswrong.com/lw/38u/best_career_models_for_doing_research/33v3
- ↑ http://lesswrong.com/lw/39z/should_lw_have_a_public_censorship_policy/img
- ↑ http://lesswrong.com/lw/ds4/article_about_lw_faith_hope_and_singularity/73ao
- ↑ http://betabeat.com/2012/07/singularity-institute-less-wrong-peter-thiel-eliezer-yudkowsky-ray-kurzweil-harry-potter-methods-of-rationality/?show=all
- ↑ File:Lw-basilisk-censorship.png, File:Lw-basilisk-censorship2.png
- ↑ e.g. this burning the evidence, which is preserved here.
- ↑ e.g. Eliezer Yudkowsky on fanfic.net. At least he's using Harry Potter fanfic rather than a new Atlas Shrugged. And, unlike Rand, he can actually write.
- ↑ This essay, while entertaining and useful, can be seen as Yudkowsky trying to reinvent the sense of awe associated with religious experience in the name of rationalism. It's even available in tract format.
- ↑ http://measureofdoubt.com/2011/11/30/how-rationality-can-make-your-life-more-awesome/
- ↑ Less is More - or: the sorry state of AI friendliness discourse (Stefan Pernar, Rational Morality blog, 2009-11-06)
- ↑ http://lesswrong.com/lw/bxr/muehlhauserwang_dialogue/
- ↑ http://lesswrong.com/lw/2l0/should_i_believe_what_the_siai_claims/2f14
- ↑ http://kruel.co/2012/08/15/qa-with-experts-on-risks-from-ai-5/
- ↑ Aspergers Poll Results: LW is nerdier than the Math Olympiad? Of course, this poll was linked at the end of a long article on Asperger's Syndrome, so the self-selection bias was overwhelming.
- ↑ This discussion is particularly interesting in terms of the social anthropology of LessWrongians. Note the downvoting of those pointing out that mating behaviour is rather important to what makes humans human.
- ↑ 67.0 67.1 http://lesswrong.com/lw/1zw/newcombs_problem_happened_to_me/ - apparently, someone who has a script for your life and never mind what you think is effectively Omega, and the appropriate response is not "get the hell out of the twisted manipulative relationship" but "invent timeless decision theory."
- ↑ http://lesswrong.com/lw/d9/selfpretending_is_not_as_useful_as_we_think/cow
- ↑ http://www.escapistmagazine.com/forums/read/18.385486-Your-favourite#15327698
- ↑ http://lesswrong.com/lw/r8/and_the_winner_is_manyworlds/
- ↑ "physicists cringe when non-professional-physicists say they're going to talk about QM."
- ↑ For viewpoints of non-many-worlders, see, e.g. Bub and Pitowsky (2007), Döring and Isham (2011), Hohenberg (2009) and (2011), Kent (2009), and Peres (1995); for Bayesian non-many-worlders, see John Baez (2003) and particularly Chris Fuchs (2002) and other "QBists" like Barnum (2010).
- ↑ e.g. [3], [4], [5], [6], [7], [8], [9], [10]
- ↑ e.g. [11], [12]
- ↑ http://lesswrong.com/lw/rh/heading_toward_morality/
- ↑ http://lesswrong.com/lw/qc/when_science_cant_help/k1j
- ↑ http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/
- ↑ http://rationallyspeaking.blogspot.com/2010/09/eliezer-yudkowsky-on-bayes-and-science.html
- ↑ http://lesswrong.com/lw/8p4/2011_survey_results/
- ↑ http://wiki.lesswrong.com/wiki/Newcomb%27s_problem
- ↑ http://lesswrong.com/lw/15z/ingredients_of_timeless_decision_theory/
- ↑ http://wiki.lesswrong.com/wiki/Decision_theory
- ↑ http://lesswrong.com/lw/iv/the_futility_of_emergence/
- ↑ Above-Average AI Scientists (2008-09-28)
- ↑ A helpful jargon guide.
- ↑ See the Wikipedia article on Murder of Meredith Kercher.
- ↑ The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom (komponisto, 2009-12-13)
- ↑ Also given a hat-tip in the Harry Potter fanfic.
- ↑ http://lesswrong.com/lw/fk/survey_results/
- ↑ http://lesswrong.com/lw/bxr/muehlhauserwang_dialogue/6g67
- ↑ http://lesswrong.com/lw/kv/beware_of_stephen_j_gould/
- ↑ http://lesswrong.com/lw/65b/scientific_misconduct_misdiagnosed_because_of/
- ↑ http://lesswrong.com/lw/kk/why_are_individual_iq_differences_ok/
- ↑ http://lesswrong.com/lw/9gy/the_singularity_institutes_arrogance_problem/5pa5
- ↑ http://lesswrong.com/lw/cfe/i_stand_by_the_sequences/