
User:AD/LW


More notes: [1]

Redundancies:

  • Privileging the hypothesis - prosecutor's fallacy
  • Fallacy of gray - continuum fallacy
  • Agreement with scholarship[2]
  • Transhumanist-focused, to the detriment of general application. Tie in with the shunning of practical usage of rationality (also: questionable effectiveness, cf. 2012 survey with exhibition of very poor confidence bias; contra: analysis that suggests less bias[3]).[4][5][6][7]

LessWrong is a community blog focused on "refining the art of human rationality." To this end, it focuses on identifying and overcoming bias, improving judgment and problem-solving, and speculating about the future. The blog has long been dominated by the ideas of Eliezer Yudkowsky, a research fellow for the Singularity Institute for Artificial Intelligence; many members of LessWrong share Yudkowsky's interests in transhumanism, artificial intelligence, the Singularity, and cryonics.

The content of LessWrong is frequently articulate, innovative, and thoughtful. However, the community's focused demographic and narrow interests have also produced an insular culture that is heavy with its own peculiar jargon and established ideas.

History

In July 2000, Eliezer Yudkowsky founded the nonprofit Singularity Institute for Artificial Intelligence (SIAI) to "create a friendly, self-improving artificial intelligence."[1] In 2006, Yudkowsky began contributing to Overcoming Bias along with GMU economist Robin Hanson. After several years and increasing popularity, Yudkowsky started a collaborative blog/community to focus on some topics of particular interest to himself and SIAI, such as rationality, philosophy, AI, and transhumanism.[2] Overcoming Bias remains a "sister blog" to LessWrong, where Hanson and others continue to discuss how human beings can compensate for natural biases.

SIAI, where Yudkowsky remains a research fellow, maintains LessWrong to provide "an introduction to issues of cognitive biases and rationality relevant for careful thinking about optimal philanthropy and many of the problems that must be solved in advance of the creation of provably human-friendly powerful artificial intelligence."[3] Yudkowsky considers LessWrong useful insofar as it advances SIAI's work,[4] and the site is a key venue for SIAI recruitment[5] and fundraising.[6] The most popular post of all time on LessWrong, for example, is an assessment of SIAI by charity evaluator GiveWell.[7]

LessWrong originally attracted the bulk of its userbase from communities interested in transhumanism. In addition to Overcoming Bias, these communities include the SL4 mailing list[8] and the Extropians mailing lists[9] (dating back to the 1990s). Accordingly, and because of the influence and ownership of the SIAI, LessWrong has essentially been a transhumanist community, emphasizing a focus on rationality per se in order to attract those who might otherwise be skeptical of apocalyptic AI.[10] Increasingly since 2010, however, many of the newcomers to LessWrong have been introduced to the site through "Harry Potter and the Methods of Rationality,"[11][12] a lengthy and interesting work of Harry Potter fanfiction written by Yudkowsky. Community opinion on the future focus of LessWrong is uncertain, but discussions about rationality led to the 2012 formation of the Center for Applied Rationality (CFAR). CFAR is devoted to researching methods of teaching rationality, and to holding retreats and summer camps to pass these methods on to others.[13]

Culture

The core of LessWrong is its many parables, metaphors, and explanations of concepts in psychology and philosophy. The popularity of such essays as Yudkowsky's "The Genetic Fallacy," which explains the eponymous concept clearly and works out some of its potential complications,[14] helped attract the growing community - even luring in those who might not otherwise be interested in transhumanism. While some critics have implied that this is perhaps some sort of deception, the peculiarly focused interests of LessWrong's most prominent members have no bearing on the usefulness of some of its great resources. In other words: just because Eliezer Yudkowsky wants to be a robot doesn't mean that his explanation of Bayes' Theorem isn't interesting and well-written.

Bayesianism, which uses that theorem to assess probabilities when making decisions, has become one of the hallmarks of LessWrong and one of its most frequent buzzwords. Most of this jargon[15] comes from the much-revered "Sequences," a long series of essays by Yudkowsky that are considered essential reading by members of the community. Their extraordinary length can be prohibitive, however (surpassing J.R.R. Tolkien's Lord of the Rings). Such time would probably be better spent reading some of the books written by the actual researchers behind the Sequences' concepts, such as Thinking, Fast and Slow by Daniel Kahneman, The Black Swan by Nassim Taleb, or Thinking and Deciding by Jonathan Baron - all of which are just as readable. While this does not negate the value of the Sequences, particularly if you are interested in the transhumanist ends to which many of them turn (neither Kahneman, Taleb, nor Baron addresses the future problems of godlike AI), it does mean that the "required reading" status of the Sequences might be partially prompted by the same cognitive dissonance that helps perpetuate exclusive clubs: it was hard to read them all, but I did it, therefore they must be good.
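For readers who haven't met the theorem, here is a standard worked example (the numbers are illustrative, not drawn from any LessWrong post): suppose a disease affects 1% of the population, a test detects 90% of real cases, and it falsely flags 9% of healthy people. Bayes' Theorem gives the chance that a positive result actually indicates the disease:

\[
P(\text{disease} \mid +) = \frac{P(+ \mid \text{disease})\,P(\text{disease})}{P(+ \mid \text{disease})\,P(\text{disease}) + P(+ \mid \text{healthy})\,P(\text{healthy})} = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.09 \times 0.99} \approx 0.09
\]

That is, only about 9% - the kind of base-rate neglect that LessWrong's introductory material is built around.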

LessWrong's culture resembles, in most other respects, the standard set of male Internet-libertarians[16] so familiar in other places - including cringe-inducing discussions of the merits of racism.[17] Notably, though, members of LessWrong are unusually concerned and active in charitable giving.[18] They are also laudable for prizing accurate thinking over their personal viewpoints: it is not uncommon to witness someone actually change their mind when confronted with a good argument, a rarer phenomenon than one might think.

LessWrong operates on a "karma" system modeled on Reddit's. A post or comment that is deemed insightful will be promoted and highly visible, whereas too many downvotes will hide it. Because of this, LessWrong is almost completely troll-free. While this constant evaluation can be intimidating, members of the community generally take pride in voting for good thinking: wary of groupthink, they will usually endorse even the harshest of criticism, as long as it is intelligent.

Although most posters don't consider LessWrong to be "mainstream" philosophy, it has been compared to Wittgenstein, whose views on how language limits the ability of rationalists to communicate seem to best represent those of Yudkowsky and company, and to Quine,[19] whose approach to naturalism and science reflects the empiricism and reductionism of LW. Gary Drescher's excellent-but-dense Good and Real[20] covers a lot of the same ground as the Sequences and came out around the time the Sequences started; Yudkowsky had not read it before finishing them, but approves of the book.

A key part of the LessWrong approach to human rationality is to avoid "fallacies of compression" and mistaking the map for the territory - errors that result from humans trying to fit a vast universe into a relatively small and squishy piece of meat located between their ears. According to Yudkowsky, beliefs should constrain our expectations; a belief that remains "true" no matter what we see is nothing more than blind faith. For example, two people arguing over "whether a tree falling in a forest, with no one around to hear it, makes a sound" might answer yes and no based on different definitions of "sound," but neither would actually expect anything different to happen. Precision, therefore, is the order of the day, and it is achieved through expectations and sensory anticipation rather than by merely saying something is true and arguing through clever wordplay.
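In Bayesian terms (our gloss, not notation taken from the essay), a belief \(B\) that makes no difference to what you expect to observe - that is, \(P(E \mid B) = P(E \mid \neg B)\) for every possible observation \(E\) - can never be updated by any evidence:

\[
P(B \mid E) = \frac{P(E \mid B)\,P(B)}{P(E \mid B)\,P(B) + P(E \mid \neg B)\,P(\neg B)} = P(B)
\]

This is exactly the sense in which such a belief amounts to blind faith.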

At its best, LessWrong's articles really do articulate important aspects of human rationality. Such ideas as the "ugh field"[21] - the instinctive distaste our minds have for difficult decisions - and the "affective death spiral"[22] - in which praise for an entity turns into an endless cycle of greater praise - are valuable insights, and anyone aspiring to make better decisions should read them. Similarly useful are such techniques as the rationalist taboo, which requires speakers to spell out what they actually mean rather than argue over a word, or "Crocker's Rules," by which a participant declares that others may address them bluntly without worrying about giving offense. All of these efforts are aimed at promoting better thinking and better decisions, and are accordingly commendable.

Criticism

A disengagement from the practical is another feature of LessWrong's culture, explicitly and strongly affirmed.[23] This refusal to delve into contemporary politics or policy is held up as laudable, because it is seen as a way to preserve objective rationality. One of the most-cited and most popular phrases is "politics is the mind-killer," derived from the Yudkowsky essay of the same name,[24] which argues that real discussion never occurs in a political context, because "winning" the discussion for your "side" becomes paramount, rather than reaching an optimal decision.[25] While logical to the extent that this is an accurate criticism of political discourse, it's also essentially a declaration of surrender: "It's hard to stay rational in politics, so we'll just give up." If members of LessWrong truly are less biased in their thinking than the general public, as they've argued,[26] then the more they succeed in drawing people into the fold, the more they will cede the field to the irrational.

  • Isolation (cf. Pirsig, Objectivism); contra: fears of growth[8]
  • Basilisk

Footnotes

  1. SIAI's year 2000 990-EZ
  2. "About", Overcoming Bias
  3. SIAI's year 2009 990
  4. "important and interesting in proportion to how much it helps construct a Friendly AI"
  5. http://web.archive.org/web/20110621192259/http://singinst.org/achievements
  6. http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/
  7. "Thoughts on the Singularity Institute," LessWrong
  8. http://sl4.org/
  9. http://www.extropy.org/emaillists.htm
  10. Michael Vassar, the former president of Singularity Institute, who stepped down in January to pursue his idea for a personalized medicine startup–later bringing on Mr. Mowshowitz and Ms. Vance–admitted the nonprofit had learned to hide some of its more radical ideas, emphasizing rationality instead. As Mr. Yudkowsky put it, “There are plenty of people out there who would be interested in cognitive science-based thinking skills who wouldn’t necessarily buy into the whole ‘save humanity’ thing.”
  11. "Survey Results," LessWrong
  12. "2011 Survey Results," LessWrong
  13. "What We Do," CFAR
  14. "The Genetic Fallacy", LessWrong
  15. "Jargon", Lesswrongwiki
  16. "2012 Survey Results", LessWrong
  17. Comments on "Rationality Quotes, November 2012", LessWrong
  18. "Giving What We Can: 80,000 Hours and Metacharity", LessWrong
  19. "Less Wrong Rationality and Mainstream Philosophy", LessWrong
  20. http://mitpress.mit.edu/catalog/item/default.asp?tid=10902&ttype=2
  21. "Ugh Fields", LessWrong
  22. "Affective Death Spirals", LessWrong
  23. FAQ, Lesswrongwiki
  24. "Politics Is the Mind-Killer", LessWrong
  25. "A Fable of Science and Politics", LessWrong
  26. "Participation in the LW Community Associated with Less Bias", LessWrong