LessWrong
LessWrong is a community blog focused on "refining the art of human rationality." To this end, it focuses on identifying and overcoming bias, improving judgment and problem-solving, and speculating about the future. The blog is based on the ideas of Eliezer Yudkowsky, a research fellow for the Machine Intelligence Research Institute (until recently known as the Singularity Institute for Artificial Intelligence); many members of LessWrong share Yudkowsky's interests in transhumanism, artificial intelligence, the Singularity, and cryonics.

The content of LessWrong is occasionally articulate, innovative, and thoughtful. However, the community's focused demographic and narrow interests have also produced an insular culture, heavy with its own peculiar jargon and established ideas, some of which might benefit from a better grounding in reality.

History

In July 2000, Eliezer Yudkowsky founded the nonprofit Singularity Institute for Artificial Intelligence (SIAI) to "create a friendly, self-improving artificial intelligence."[1] In 2006, Yudkowsky began contributing to Overcoming Bias along with GMU economist Robin Hanson. After several years and increasing popularity, Yudkowsky started a collaborative blog/community to focus on topics of particular interest to himself and SIAI, such as rationality, philosophy, AI, and transhumanism.[2] Overcoming Bias remains a "sister blog" to LessWrong, where Hanson and others discuss ideas stemming from Hanson's speculations on economics.

SIAI (now MIRI), where Yudkowsky remains a research fellow, hosts and maintains LessWrong[3] to provide "an introduction to issues of cognitive biases and rationality relevant for careful thinking about optimal philanthropy and many of the problems that must be solved in advance of the creation of provably human-friendly powerful artificial intelligence."[4] Yudkowsky considers LessWrong useful insofar as it advances SIAI's work,[5] and the site is a key venue for SIAI recruitment[6] and fundraising.[7] The most popular post of all time on LessWrong, for example, is an assessment of SIAI by charity evaluator GiveWell.[8]

[Image: cover of Harry Potter and the Methods of Rationality]
LessWrong originally attracted the bulk of its userbase from communities interested in transhumanism. In addition to Overcoming Bias, these communities include the SL4 mailing list[9] and the Extropians mailing lists[10] (dating back to the 1990s). Accordingly, LessWrong has long been an essentially transhumanist community, emphasizing a focus on rationality per se in order to attract those who might otherwise be skeptical of apocalyptic AI.[11]

Increasingly since 2010, many newcomers to LessWrong have been introduced to the site through "Harry Potter and the Methods of Rationality",[12][13] a popular work of Harry Potter fanfiction written by Yudkowsky. The future focus of LessWrong is unclear, but discussions about rationality led to the 2012 formation of the Center For Applied Rationality (CFAR), which is devoted to researching methods of teaching rationality and to holding retreats and summer camps to pass these methods on to others.[14]

Content

The core of LessWrong is its many parables, metaphors, and explanations of concepts in psychology and philosophy. The popularity of essays such as Yudkowsky's "The Genetic Fallacy," which clearly explains the genetic fallacy and works out some of its potential complications,[15] helped attract the growing community, even luring in those who might not otherwise be interested in transhumanism. While some critics have implied that this is perhaps some sort of deception, the peculiarly focused interests of LessWrong's most prominent members have no bearing on the usefulness of some of its great resources. In other words: just because Eliezer Yudkowsky wants to be a robot doesn't mean that his explanation of Bayes' Theorem isn't interesting and well-written.

Bayesianism, which uses that theorem to assess probabilities when making decisions, has become one of the hallmarks of LessWrong and its most frequent buzzword. Most of this jargon[16] comes from the much-revered "Sequences," a long series of essays by Yudkowsky that are considered essential reading by members of the community. Their extraordinary length (surpassing J.R.R. Tolkien's Lord of the Rings) can be prohibitive, however. Such time might be better spent reading some of the books written by the actual researchers behind the Sequences' concepts, such as Thinking, Fast and Slow by Daniel Kahneman, The Black Swan by Nassim Taleb, or Thinking and Deciding by Jonathan Baron, all of which are easily readable. This does not negate the value of the Sequences, particularly if you are interested in the transhumanist ends to which many of them turn (neither Kahneman, Taleb, nor Baron addresses the potential yet highly speculative future problems of godlike AI); but it has been suggested that the "required reading" status of the Sequences might be partially prompted by the same cognitive dissonance that helps perpetuate exclusive clubs: it was hard to read them all, but I did it, therefore they must be good.[wp]
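For readers unfamiliar with the theorem itself, it fits in a few lines. The following is a minimal sketch in Python; the test scenario and all of the numbers are invented for illustration and are not taken from any LessWrong post.

```python
# A minimal sketch of Bayes' Theorem, the formula behind LessWrong's
# "Bayesianism": P(H|E) = P(E|H) * P(H) / P(E).
# The scenario and numbers below are invented for illustration only.

def bayes_update(prior, likelihood, false_positive_rate):
    """Return P(hypothesis | evidence) from a prior and two conditional rates."""
    # Law of total probability: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

# Example: a test that catches a rare condition 90% of the time, with a 5%
# false-positive rate, applied to a population where the base rate is 1%.
posterior = bayes_update(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
print(f"P(condition | positive test) = {posterior:.3f}")  # about 0.154
```

The counterintuitive output (a positive test still leaves the odds well under one in five) is the kind of result the site's Bayes evangelism trades on.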

A key part of the LessWrong approach to human rationality is avoiding "fallacies of compression" and mistaking the map for the territory, errors that result from humans trying to fit a vast universe into a relatively small and squishy piece of meat located between their ears. According to Yudkowsky, beliefs should constrain our expectations; a belief that remains "true" no matter what we observe is nothing but blind faith. For example, two people arguing over "whether a tree falling in a forest, with no one around to hear it, makes a sound" might answer yes and no based on different definitions of "sound," but neither actually expects to observe anything different. Precision, therefore, is the order of the day, and it is achieved through expectations and sensory anticipation rather than merely declaring something true and defending it through clever wordplay.

At their best, LessWrong's articles really do articulate important aspects of human cognition. Ideas such as the "ugh field"[17] (the instinctive distaste our minds have for difficult decisions) and the "affective death spiral"[18] (in which praise for an entity turns into an endless cycle of greater praise) are valuable insights, and anyone aspiring to make better decisions should read them. Similarly useful are techniques such as the rationalist taboo, which requires speakers to specify precisely what they mean instead of using more concise terms whose meanings might be vague, and "Crocker's Rules," under which a participant in a debate invites frank responses and declares that they will not take offense. All of these efforts are aimed at promoting better thinking and better decisions, and are accordingly commendable.

Caution

When reading the Sequences, care should be taken to distinguish between fact and Yudkowsky's personal opinion. For example, the quantum physics sequence is a mishmash of a genuinely valuable intuitive explanation of the basic concepts of quantum physics; advocacy of many-worlds (a mainstream, but by no means universally accepted, interpretation of quantum mechanics) presented as if it were proven fact; wild speculation about the identity of consciousness and ill-defined concepts like "the Tao of Physics" (these are at least marked as speculative); and science fiction stories in which the Yudkowsky stand-in debates with strawmen (a staple of most of his writing). An uncritical reader is likely either to get even more confused about quantum physics, despite Yudkowsky's promise to explain it in non-confusing terms, or worse, to swallow Yudkowsky's personal, fringe views alongside mainstream science, forming a bond between the two in their worldview.

It is important to remember: the good bits are not original and the original bits are not good — the cognitive biases are idea-for-idea straight out of Kahneman, and the quantum physics sequence not only makes actual physicists and chemists throw things at walls, it builds to an essay arguing that you should use Yudkowsky's version of Bayesianism rather than empirical science. Take everything with a grain of salt, and read the comments, which frequently carry detailed refutations (often downvoted).

Culture

LessWrong's culture resembles, in most other respects, the standard set of predominantly male, middle-class Internet libertarians[19] so familiar elsewhere, including cringe-inducing discussions of the merits of racism.[20] Notably, though, members of LessWrong are unusually concerned with and active in charitable giving.[21] They are also laudable for prizing accurate thinking over their personal viewpoints: it is not uncommon to witness someone actually change their mind when confronted with a good argument, a rarer phenomenon than one might think.

The inner reaches of LessWrong can get a little ... insular. Here's Michael Vassar, former President of MIRI and an active member of the Bay Area transhumanist community, talking to Harper's in 2014:[22]

It was getting late. I asked him about the rationalist community. Were they really going to save the world? From what?

“Imagine there is a set of skills,” he said. “There is a myth that they are possessed by the whole population, and there is a cynical myth that they’re possessed by 10 percent of the population. They’ve actually been wiped out in all but about one person in three thousand.” It is important, Vassar said, that his people, “the fragments of the world,” lead the way during “the fairly predictable, fairly total cultural transition that will predictably take place between 2020 and 2035 or so.”

Since about 2014, Yudkowsky himself has stopped participating in the site because it isn't fun for him anymore, and many others have followed suit. Yudkowsky and many other current and former LessWrongers now form what is colloquially referred to as the LessWrong Diaspora, scattered across their own blogs, Tumblr,[23] and LessWrong user Scott Alexander's blog Slate Star Codex.[24]

Finances

LessWrong is run under the umbrella of MIRI (formerly the Singularity Institute for Artificial Intelligence). Notable sponsors include PayPal founder Peter Thiel, who has contributed five-figure sums for several years. Smaller contributions are also solicited and accepted.[25] A $118,000 loss due to theft was reported in 2009 and has not been recovered.[26] MIRI uses its funds to recompense its officers (Yudkowsky makes about $80,000 per annum) and to fund research, although no articles were placed in any peer-reviewed journals until 2012. In that year, two articles appeared in the International Journal of Machine Consciousness[27] and one in the Journal of Consciousness Studies,[28] both of which compare unfavorably to more prestigious journals in the field, such as Artificial Intelligence.[29] Requests for funding are regularly made on the LessWrong website and at conferences, under the justification that MIRI is, uniquely, working to save the world from a threat to its very existence. Yudkowsky is happy to solicit financial contributions even from non-wealthy individuals, in order to "save" them from the speculative scenario of runaway artificial intelligence.

Criticism

LessWrong is mainly concerned with achieving accurate beliefs about the world, rather than with achieving goals. The refusal to delve into contemporary politics or policy is held up as laudable, because it is seen as a way to preserve objective rationality.[30] One of the most-cited and most popular phrases is "politics is the mind-killer," derived from the Yudkowsky essay of the same name,[31] which argues that real discussion never occurs in a political context because "winning" the discussion for your "side" becomes paramount, rather than reaching an optimal decision.[32] While logical to the extent that this is an accurate criticism of political discourse, it is also essentially a declaration of surrender: "It's hard to stay rational in politics, so we'll just give up." If members of LessWrong truly are less biased in their thinking than the general public, as they've argued,[33] then the more they succeed in drawing people into the fold, the more they may cede the field to the irrational. This abstention also leads to a preponderance of (a) Silicon Valley libertarianism (the default view of the participants) as the assumed neutral "not-politics" and (b) open slather for neoreactionaries, who largely incubated on OB/LW.

The site has been accused of being a personality cult of Eliezer Yudkowsky, an accusation that overlooks the other essayists who have become almost as influential. Cultishness is heavily discussed on the site, both by Yudkowsky and others.[34] Amusingly enough, this led some search engines to suggest "cult" as a related term to "Less Wrong" ... in response to which, some users started using the code-word "phyg"[35] to mean "cult".[36]

While the appearance of a cult has faded, the like-mindedness that led to the criticism has not. LessWrong has a very deep but narrow set of demographics that have only slightly improved over the years[37] — the same problem common to academic psychology and known as being "WEIRD": "Western, Educated, Industrialized, Rich, and Democratic" or also "White, Educated, Intelligent, Rich, and Democratic."[38]

Unlike its demographics, LessWrong's isolationism is a self-made problem. Despite intense philosophical speculation, the users tend towards a proud contempt of mainstream and ancient philosophy,[39] which leaves them re-inventing the wheel. Coupled with the metaphors and parables that are central to LessWrong's attraction, this tendency explains why they invent new terms for already existing concepts.[40] The compatibilist position on free will and determinism is called "requiredism"[41] on LessWrong, for example, and the continuum fallacy is relabeled "the fallacy of gray." The end result is a Seinfeldesque series of superfluous neologisms.

GiveWell assessment of SIAI

GiveWell's 2012 evaluation of SIAI as a charity[8] sets out a number of problems with the organisation, the site and its aims as compared to its verifiable results:

  • SIAI's argument for its work is poorly advanced;
  • Its arguments involve huge consequences of small probabilities (Pascal's mugging);
  • The artificial intelligence propositions advanced do not engage mainstream research and are not endorsed by active researchers in the field;
  • The group shies away from putting its claims to the test;
  • There is an apparent, poorly grounded belief in the group's superior general rationality (the point of LessWrong);
  • There is an overall disconnect between SI's goals and its activities.

GiveWell's recommendation was that at this stage (2012), donating to SIAI would work against the organisation's stated goals, which is approximately the worst thing you could say about a charity.

Roko's Basilisk

See the main article on this topic: Roko's basilisk
[Image caption: DON'T BLINK!]

The most prominent criticism to be made of LessWrong involves the incident of Roko's Basilisk. The absurdities involved beggar belief.

Yudkowsky has long been interested in the idea that you should act as if your decisions were able to determine the behavior of causally separated simulations of you:[42] if you can plausibly forecast a past or future agent simulating you, and then take actions in the present because of this prediction, then you "determined" the agent's prediction of you, in some sense.

The idea is that your decision, the decision of a simulation of you, and any prediction of your decision all have the same cause: an abstract computation that is being carried out. Just as a calculator and any copy of it can be predicted to output the same answer given the same input, the calculator's output and the output of its copy are indirectly linked by this abstract computation. Timeless Decision Theory says that, rather than acting as if you are determining your individual decision, you should act as if you are determining the output of that abstract computation.
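As a toy illustration only (the function and scenario below are made up for this article, not drawn from any LessWrong source), the point is simply that a deterministic procedure and a faithful copy of it cannot give different answers to the same input:

```python
# A toy sketch of the "same abstract computation" intuition. The function
# stands in for the abstract decision procedure; the two calls stand in for
# "you" and "a simulation of you". Everything here is hypothetical.

def decide(observation):
    """One deterministic decision procedure; any faithful copy must agree with it."""
    return "one-box" if observation == "predictor looks reliable" else "two-box"

your_choice = decide("predictor looks reliable")
simulated_choice = decide("predictor looks reliable")  # the copy, given the same input

# Both outputs are fixed by the same computation, so they cannot differ:
assert your_choice == simulated_choice
print(your_choice)  # "one-box"
```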

This sort of thinking gets odd when you imagine superintelligences, because of all the extremes involved: their predictions of human behaviour may be near-perfect, as in Newcomb's paradox[wp], their power may be near-infinite, and the consequences could be near-eternal. Yudkowsky has also advocated utilitarianism, saying that it would be justified to torture one person for 50 years to prevent dust specks in the eyes of sufficiently large numbers of people.[43]
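The arithmetic behind that conclusion is plain aggregation. As a hedged illustration using placeholder symbols of our own (not figures from the original "Torture vs. Dust Specks" post):

```latex
% Placeholder quantities for illustration; not taken from the original post.
% Let \varepsilon be the (tiny) disutility of one dust speck, T the (large but
% finite) disutility of 50 years of torture, and N the number of people specked.
N \varepsilon > T \quad \Longleftrightarrow \quad N > \frac{T}{\varepsilon}.
% For any finite \varepsilon > 0 and finite T, a sufficiently large N tips the
% scales, and Yudkowsky's 3^^^3 dwarfs any remotely plausible T/\varepsilon.
% Whether disutilities should simply be summed like this is, of course, exactly
% the premise critics reject.
```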

In July 2010, Roko (a top contributor at the time) wondered whether a future Friendly AI would punish people who didn't do everything in their power to further the AI research from which this AI originated, at the very least by donating all they had to it. He reasoned that every day without AI, bad things happen (150,000+ people die every day, war is fought, millions go hungry) and that a future Friendly AI would want to prevent this, so it might punish those who understood the importance of donating but didn't donate all they could. He then wondered whether future AIs would be more likely to punish those who had wondered whether future AIs would punish them. That final thought proved too much for some LessWrong readers, who then had nightmares about being tortured for not donating enough to SIAI.

Yudkowsky replied to Roko's post calling him names, claiming that posting such things on an Internet forum "potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us", and saying that users had told him of nightmares prompted by the post. Four hours later, he deleted Roko's post,[44] including all comments. Roko left LessWrong, deleting his thousands of posts and comments.[45] (He later briefly returned[46] and posted, among other things, that "I agree that the post in question should not appear in public"[47] and "I wish I had never learned about any of these ideas".[48])

Yudkowsky later claimed the basilisk would not in fact work the way Roko had posited; but rather than simply explaining how such a reaction was inappropriate or how the ideas underlying it were flawed, he instead attempted to censor all discussion of the topic. The matter is now the occasional subject of contorted LW posts, as people try to discuss the issue without talking about what they're talking about,[49][50] and is a reliable space-filler for journalists covering LW-related stories.[51]

The basilisk kerfuffle has also alienated fellow cryonicists.[52][53]

The rational way to deal with critics

One knowledgeable critic of LessWrong started out buying into their claims, was seriously put off by the response to Roko's Basilisk, and established a popular reference site on the topic. Various LW members then followed him around the net to harass him, calling him a liar wherever they found him. He posted to LessWrong saying that the ongoing flame war had been affecting his health, and asking how it could stop.[54]

Yudkowsky responded with the following, which is no doubt an entirely reasonable thing to ask someone to do:[55]

You can update by posting a header to all of your blog posts saying, "I wrote this blog during a dark period of my life. I now realize that Eliezer Yudkowsky is a decent and honest person with no ill intent, and that anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret this page, and leave it here as an archive to that regret." If that is how you feel and that is what you do, I will treat with you starting from scratch in any future endeavors. I've been stupid too, in my life. (If you then revert to pattern, you do not get a second second chance.)

Note that most of these blog posts don't even mention Yudkowsky, much less quote mine him. But still, he wants his own header on top of every single blog post.

On the plus side, other LessWrong members were more polite about accepting this olive branch.

In popular culture

  • Zendegi by Greg Egan (2010) features Nate Caplan, who wants to be uploaded. "My IQ is one hundred and sixty ... You can always reach me through my blog, Overpowering Falsehood dot com, the number one site for rational thinking about the future —" The novel also features the Benign Superintelligence Bootstrap Project, which persuades a billionaire to donate his fortune, hoping that the "being of truly God-like powers" will grant him immortality come the Singularity. He dies disappointed, and the Project "turn[s] five billion dollars into nothing but padded salaries and empty verbiage."[56]
  • Elementary S3E4, "Bella" (2014), features an artificial intelligence and the theory that a member of the Existential Threat Research Association killed its creator to increase the credibility of their anti-AI message. Also includes discourse on "the terror of mortality" and some I-know-that-you-know-that-I-know discussions between Sherlock and his suspect.[57]

Footnotes

  1. SIAI's year 2000 990-EZ
  2. "About", Overcoming Bias
  3. "2012 Winter Fundraiser for the Singularity Institute", LessWrong
  4. SIAI's year 2009 990
  5. "important and interesting in proportion to how much it helps construct a Friendly AI"
  6. http://web.archive.org/web/20110621192259/http://singinst.org/achievements
  7. http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/
  8. "Thoughts on the Singularity Institute," LessWrong
  9. http://sl4.org/
  10. http://www.extropy.org/emaillists.htm
  11. Michael Vassar, the former president of Singularity Institute, who stepped down in January to pursue his idea for a personalized medicine startup–later bringing on Mr. Mowshowitz and Ms. Vance–admitted the nonprofit had learned to hide some of its more radical ideas, emphasizing rationality instead. As Mr. Yudkowsky put it, “There are plenty of people out there who would be interested in cognitive science-based thinking skills who wouldn’t necessarily buy into the whole ‘save humanity’ thing.”
  12. "Survey Results," LessWrong
  13. "2011 Survey Results," LessWrong
  14. "What We Do," CFAR
  15. "The Genetic Fallacy", LessWrong
  16. "Jargon", Lesswrongwiki
  17. "Ugh Fields", LessWrong
  18. "Affective Death Spirals", LessWrong
  19. "2012 Survey Results", LessWrong
  20. Comments on "Rationality Quotes, November 2012", LessWrong
  21. "Giving What We Can: 80,000 Hours and Metacharity", LessWrong
  22. https://pdf.yt/d/-jQQX6XY9dU0LN4G/embed/?sparse=1&footer=0
  23. Rationalist Masterlist (yxoque, 9 June 2015)
  24. http://slatestarcodex.com
  25. I just donated to the SIAI
  26. [1]
  27. http://www.scimagojr.com/journalsearch.php?q=19900192593&tip=sid&clean=0
  28. http://www.scimagojr.com/journalsearch.php?q=15435&tip=sid&clean=0
  29. http://www.scimagojr.com/journalsearch.php?q=23675&tip=sid&clean=0
  30. FAQ
  31. "Politics Is the Mind-Killer", LessWrong
  32. "A Fable of Science and Politics", LessWrong
  33. "Participation in the LW Community Associated with Less Bias", LessWrong
  34. Every Cause Wants To Be A Cult
  35. The simple Caesar cipher widely known since the days of usenet as ROT13 transforms "cult" to "phyg" and back again.
  36. Our Phyg Is Not Exclusive Enough
  37. "Survey Results", LessWrong
  38. http://lesswrong.com/lw/17x/beware_of_weird_psychological_samples/
  39. "Train Philosophers with Pearl and Kahneman, Not Plato and Kant", LessWrong
  40. "Eliezer's Sequences and Mainstream Academia", LessWrong
  41. http://lesswrong.com/lw/r0/thou_art_physics/
  42. http://lesswrong.com/lw/15z/ingredients_of_timeless_decision_theory/
  43. http://lesswrong.com/lw/kn/torture_vs_dust_specks/uf7
  44. Though a google cache of it survived - please do not read if you are prone to nightmares or OCD and/or are worried about causing incalculable harm to yourself.
  45. http://lesswrong.com/user/Roko
  46. http://lesswrong.com/user/formallyknownasroko
  47. http://lesswrong.com/lw/38u/best_career_models_for_doing_research/33w7
  48. http://lesswrong.com/lw/38u/best_career_models_for_doing_research/344l
  49. http://lesswrong.com/lw/39z/should_lw_have_a_public_censorship_policy/img
  50. http://lesswrong.com/lw/ds4/article_about_lw_faith_hope_and_singularity/73ao
  51. http://betabeat.com/2012/07/singularity-institute-less-wrong-peter-thiel-eliezer-yudkowsky-ray-kurzweil-harry-potter-methods-of-rationality/?show=all
  52. Mark Plus. A question about Less Wrong's Basilisk . The Life of Man Qua Man on Earth, by a Senior Cryonicist. 2012 October 31.
  53. Mark Plus. Re: A surprisingly positive response. New Cryonet. Yahoo! Groups. 2012 November 2.
  54. http://lesswrong.com/lw/lb3/breaking_the_vicious_cycle/img
  55. http://lesswrong.com/lw/lb3/breaking_the_vicious_cycle/bnrrimg
  56. http://gareth-rees.livejournal.com/31182.html
  57. http://buddy2blogger.blogspot.co.uk/2014/11/bella-elementary-artificial-intelligence-review-recap-summary-plot-spoilers-season-3-episode-4.html