
LessWrong

The LessWrong logo, apparently stolen from NATO
Yeah, I was on LessWrong for quite a while, in a very low-key way. My period of time there basically went "These are people talking about interesting stuff. Admittedly they have a few odd beliefs like the cryonic thing, but interesting people." "…apart from this virulent racist who keeps talking about IQ…" "…and all these people who keep talking about being ‘Pick-Up Artists’…" "my God, this place needs to be burned down and the earth salted!"
—Andrew Hickey[1]

LessWrong is a community blog focused on "refining the art of human rationality." To this end, it focuses on identifying and overcoming bias, improving judgment and problem-solving, and speculating about the future. The blog is based on the ideas of Eliezer Yudkowsky, a research fellow for the Machine Intelligence Research Institute (MIRI; previously known as the Singularity Institute for Artificial Intelligence, and then the Singularity Institute). Many members of LessWrong share Yudkowsky's interests in transhumanism, artificial intelligence (AI), the Singularity, and cryonics.

LessWrong 2.0 was created in 2017 to reinvigorate the "failing website"[2] (which still exists). LessWrong 2.0 (a.k.a. the LessWrong Team) also took on tasks external to the LessWrong website itself. In 2021, it was reorganized as Lightcone Infrastructure.[2] Because simple is bad and reorganizing organizations frequently will fix everything.

LessWrong, MIRI, and Lightcone have been based in Berkeley, California.

History[edit]

Cover image of Harry Potter and the Methods of Rationality

In July 2000, Yudkowsky founded the nonprofit Singularity Institute for Artificial Intelligence (SIAI) to "create a friendly, self-improving artificial intelligence."[3] In 2006, Yudkowsky began contributing to Overcoming Bias[4] along with George Mason University economist Robin Hanson. After several years of increasing popularity, Yudkowsky had amassed a cult following and started a collaborative blog/community to focus on some topics of particular interest to himself and SIAI, such as rationality, philosophy, AI, and transhumanism.[5] Overcoming Bias remains a "sister blog" to LessWrong, where Hanson and others discuss how human beings can compensate for natural biases (and ideas stemming from Hanson's speculations on economics).

MIRI, where Yudkowsky remains a research fellow, hosted and maintained LessWrong[6] to provide "an introduction to issues of cognitive biases and rationality relevant for careful thinking about optimal philanthropy and many of the problems that must be solved in advance of the creation of provably human-friendly powerful artificial intelligence."[7] Yudkowsky considers LessWrong useful insofar as it advances SIAI's work,[8] and the site is a key venue for SIAI recruitment[9] and fundraising.[10] The most popular post of all time on LessWrong, for example, is an assessment of SIAI by charity evaluator GiveWell.[11]

LessWrong originally attracted the bulk of its user base from communities interested in transhumanism. In addition to Overcoming Bias, these communities include the SL4 mailing list[12] and the Extropians mailing lists[13] (dating back to the 1990s). Accordingly, LessWrong has long been an essentially transhumanist community, emphasizing a focus on rationality per se in order to attract those who might otherwise be skeptical of apocalyptic AI.[14]

Since 2010, many newcomers to LessWrong have been introduced to the site through the e-book Harry Potter and the Methods of Rationality,[15][16][17] a popular work of Harry Potter fanfiction written by Yudkowsky. The future focus of LessWrong is unclear, but discussions about rationality led to the 2012 formation of the Center For Applied Rationality (CFAR).[18][19] CFAR is devoted to researching methods of teaching rationality, and to holding retreats and summer camps to pass these methods on to others.[20]

After 2013, Yudkowsky himself stopped participating much in the site because it wasn't fun anymore (though he still comments occasionally). Many others have followed suit. Yudkowsky and many other current and ex-LessWrongers now form what is colloquially referred to as the LessWrong Diaspora, on their own blogs, on Tumblr[21] and on LessWrong user Scott Alexander's blog Slate Star Codex.[22] Alexander referred to the site's 2015 participants as "some kind of pack of unquiet spirits who have moved in to haunt it after it got abandoned by the founding community members."[23]

Content[edit]

The good[edit]

At their best, LessWrong's articles really do articulate important aspects of human cognition. The core of LessWrong is its many parables, metaphors, and explanations of concepts in psychology and philosophy. The popularity of essays such as Yudkowsky's "The Genetic Fallacy", which clearly explains the genetic fallacy and works out some of its potential complications,[24] helped attract the growing community — even luring in those who might not otherwise be interested in transhumanism. While some critics have implied that this is perhaps some sort of deception (along the lines of bait-and-switch), the peculiarly-focused interests of LessWrong's most prominent members have no bearing on the usefulness of some of its great resources (even if they're unacknowledged idea-for-idea popularisations of Daniel Kahneman).

A key part of the LessWrong approach to human rationality is to avoid "fallacies of compression" and mistaking the map for the territory, both of which result from humans trying to fit a vastly huge universe into a relatively small and squishy piece of meat located between their ears. According to Yudkowsky, beliefs should constrain our expectations, and a belief that stays "true" no matter what we observe is nothing more than blind faith — for example, two people arguing "whether a tree falling in a forest, with no one around to hear it, makes a sound" might answer yes and no based on different definitions of "sound", but wouldn't actually expect anything different to happen. Precision, therefore, is the order of the day, and is achieved through expectations and sensory anticipation rather than merely saying something is true and arguing through clever wordplay. Particularly useful ideas include:

  • The "ugh field"[25] — the instinctive distaste our minds have for difficult decisions
  • Arguments as soldiers — the instinct to defend any argument you've put forward, no matter how stupid
  • The "affective death spiral"[26] — in which praise for an entity turns into an endless cycle of greater praise
  • The rationalist taboo, which requires speakers to specify precisely what they mean instead of using more concise terms whose meanings might be vague
  • "Crocker's Rules", in which a participant in a debate invites frank responses and declares that they will not take offense. (In practice this is used by people who want to be free to give offense and reliably hit the roof when they're on the receiving end.) Lee Daniel Crocker credits Yudkowsky with actually coining the rule.[27]

All of these efforts are aimed at promoting better thinking and better decisions, and are accordingly commendable.

In other words: just because Yudkowsky wants to live as a simulation in a computer forever doesn't mean that his explanation[28] of Bayes' Theorem isn't interesting and well-written. (Bayesianism, which uses that theorem to assess probabilities when making decisions, has become one of the hallmarks of LessWrong and its most frequent buzzword.)
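
For reference, the theorem itself fits in a few lines. Below is a minimal sketch in Python; the function is standard Bayes' theorem, while the specific disease-test numbers are purely illustrative (of the kind Yudkowsky's essay walks through), not anything prescribed by LessWrong.

    def posterior(prior, true_positive_rate, false_positive_rate):
        """P(hypothesis | positive test), straight from Bayes' theorem."""
        p_positive = (true_positive_rate * prior
                      + false_positive_rate * (1 - prior))
        return true_positive_rate * prior / p_positive

    # Illustrative numbers: 1% base rate, 80% sensitivity, 9.6% false-positive rate.
    print(round(posterior(0.01, 0.80, 0.096), 3))  # 0.078: a positive result
    # still leaves only about an 8% chance that the hypothesis is true.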

The meh[edit]

LessWrong has an extensive jargon[29] that mostly comes from the much-revered "Sequences",[30] a long series of essays by Yudkowsky considered essential reading by members of the community. Their extraordinary length can be prohibitive, however (surpassing J.R.R. Tolkien's Lord of the Rings).[note 1]

Such time might be better spent reading some of the books written by the actual researchers behind the Sequences' concepts, such as The Fabric of Reality by David Deutsch,[33] Thinking, Fast and Slow by Daniel Kahneman,[34] The Black Swan by Nassim Nicholas Taleb,[35] or Thinking and Deciding by Jonathan Baron[36] — all of which are readable. This does not negate the value of the Sequences, particularly if one is interested in the transhumanist and physical realist ends to which many of them turn.

The "required reading" status of the Sequences might be partially prompted by the same cognitive dissonance that helps perpetuate exclusive clubs: "it was hard to read them all, but I did it, therefore they must be good."Wikipedia

The bad[edit]

The good bits are not original and the original bits are not good. The well-written explanations of cognitive biases are taken idea-for-idea from Kahneman. Take everything with a grain of salt, and read the comments, which frequently carry detailed refutations (often downvoted).

It also contains ill-defined concepts like "The Tao of Physics"[37] (which may or may not be exactly the same as Hume's principle of the uniformity of nature) and science fiction stories where the Yudkowsky stand-in debates with strawmen (a staple of most of his writing). An uncritical reader is likely to either get even more confused about the subject matter or, worse, swallow Yudkowsky's personal, fringe views alongside mainstream science, forming a bond between the two in their worldview.

Criticism[edit]

Lack of application[edit]

Imagine there was an online community that pretended they had your job.

Pretend there is a website of trans-accountants who have never had an accounting job nor had any education in accounting. They talk about accounting all the time but they make up words for it and misuse what few words they actually know. Everything they know about accounting they learned from movies and adventure novels with accountants. They talk about post-ledger accounting and they talk about maximizing your redline value returns.

Or people who pretend to manage video rental stores but have never even owned a television.
—Poe News[38]

LessWrong is mainly concerned — or claims to be concerned — with achieving accurate beliefs about the world, rather than achieving goals. The refusal to delve into contemporary politics or policy is held up as laudable, because it is seen as a way to preserve objective rationality.[39] One of the most-cited and most popular phrases is "politics is the mind-killer", derived from the Yudkowsky essay of the same name,[40] which argues that real discussion never occurs in a political context, because "winning" the discussion for your "side" becomes paramount, rather than reaching an optimal decision.[41] While logical to the extent that this is an accurate criticism of most of political discourse, it's also essentially a declaration of surrender: "It's hard to stay rational in politics, so we'll just give up." It also subverts LessWrong's forecasting fixation: how does one forecast the future of humanity without understanding the politics of humanity?

If members of LessWrong truly are less biased in their thinking than the general public, as they've argued,[42] then the more they succeed in drawing people into the fold, the more they may cede the field to the irrational. This also leads to a preponderance of (a) Silicon Valley libertarianism (the default political view of the participants) as the assumed neutral "not-politics" and (b) open slather for neoreactionaries, who largely incubated on OB/LW.

Cult?[edit]

Despite describing itself as a forum on “the art of human rationality,” the New York Less Wrong group, which holds weekly Tuesday meetups and boasts almost 300 people signed up for its mailing list, is fixated on a branch of futurism that would seem more at home in a 3D multiplex than a graduate seminar: the dire existential threat—or, with any luck, utopian promise—known as the technological Singularity.
—Nitasha Tiku[43]

LessWrong's culture resembles, in most other respects, the standard set of predominately male, middle-class internet-libertarians[17] so familiar in other places — including cringe-inducing discussions of the merits of racism, which the neoreactionaries took as a welcoming signal.[44] Notably, though, members of LessWrong are unusually concerned and active in charitable giving.[45] They are also laudable for prizing accurate thinking over their personal viewpoints: it is not uncommon to witness someone actually change their mind when confronted with a good argument, a rarer phenomenon than one might think.

The site has been accused of being a personality cult of Eliezer Yudkowsky, and does not reflect the other essayists who have become almost as influential. Cultishness is heavily discussed on the site, both by Yudkowsky and others.[46] Amusingly enough, this led to some search engines suggesting "cult" as a related term to "Less Wrong" … in response to which, some users started using the code-word "phyg"[note 2] to mean "cult".[47]

While the appearance of a cult has faded, the like-mindedness that led to the criticism has not. LessWrong has a very deep but narrow set of demographics that have only slightly improved over the years[48] — the same problem common to academic psychology and known as being "WEIRD": "Western, Educated, Industrialized, Rich, and Democratic" or also "White, Educated, Intelligent, Rich, and Democratic."[49] Indeed, the site was strongly concerned about the impacts of growth.[50]

Contrarianism[edit]

Another problem with LessWrong is its isolationism, which (unlike its demographics) is a self-made problem. Despite intense philosophical speculation, the users tend towards a proud contempt of mainstream and ancient philosophy,[51] which then leads to them having to re-invent the wheel. When this tendency is coupled with the metaphors and parables that are central to LessWrong's attraction, it explains why they invent new terms for already existing concepts.[52] On LessWrong, for example, Yudkowsky takes the position of "requiredism" on free will and determinism, which he believes to be different from mainstream compatibilism,[note 3] and the continuum fallacy is relabeled "the fallacy of gray".[54] The end result is a Seinfeldesque series of superfluous neologisms.

Although most posters don't consider LessWrong to be "mainstream" philosophy, it has been compared to Wittgenstein, who seems to best represent Yudkowsky and company's views on how language limits the ability of rationalists to communicate, and to W.V.O. Quine,[55] whose approach to naturalism and science reflects the empiricism and reductionism of LessWrong. Gary Drescher's excellent-but-dense Good and Real: Demystifying Paradoxes from Physics to Ethics covers a lot of the same ground as the Sequences and came out around the time the Sequences started;[56] Yudkowsky had not read it before finishing them, but approves of the book.

Roko's Basilisk[edit]

See the main article on this topic: Roko's basilisk
DON'T BLINK!
I don’t think their projects (which only seem to involve publishing papers and hosting conferences) have much chance of creating either Roko’s Basilisk or Eliezer’s Big Friendly God. But the combination of messianic ambitions, being convinced of your own infallibility, and a lot of cash never works out well, regardless of ideology, and I don’t expect Yudkowsky and his cohorts to be an exception.
—David Auerbach[57]

The most prominent criticism to be made of LessWrong involves the incident of Roko's Basilisk. The absurdities involved beggar belief.

Yudkowsky has long been interested in the idea that one should act as if one's decisions were able to determine the behavior of causally-separated simulations of oneself:[58] if one can plausibly forecast a past or future agent simulating oneself, and then takes actions in the present because of this prediction, then one has in some sense "determined" the agent's prediction.

The idea is that your decision, the decision of a simulation of you, and any prediction of your decision, have the same cause: An abstract computation that is being carried out. Just like a calculator, and any copy of it, can be predicted to output the same answer, given the same input. The calculator's output, and the output of its copy, are indirectly linked by this abstract computation. Timeless Decision Theory says that, rather than acting like you are determining your individual decision, you should act like you are determining the output of that abstract computation.
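
To make the calculator analogy concrete, here is a toy sketch (ours, not anything from the Sequences; the actual decision theory is considerably more involved). The only point it illustrates is that a deterministic decision procedure and any faithful copy of it must return the same answer, which is the sense in which a predictor running "your" computation is linked to your choice.

    def decide(offer):
        """A deterministic 'decision procedure': cooperate iff the offer clears a threshold."""
        return "cooperate" if offer >= 10 else "defect"

    # A perfect predictor doesn't need to watch you choose; it just runs the same computation.
    predictor_model_of_you = decide

    your_choice = decide(12)
    predicted_choice = predictor_model_of_you(12)
    assert your_choice == predicted_choice  # same abstract computation, same output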

This sort of thinking gets odd when one imagines superintelligences, because of all the extremes involved: their predictions of human behaviour may be near-perfect, as in Newcomb's paradox, their power may be near-infinite, and the consequences could be near-eternal. Yudkowsky has also advocated the absolute worst form of utilitarianism, saying that it would be justified to torture one person for 50 years to prevent dust specks in the eyes of a sufficiently large number of people (3^^^3 of them, a number so ridiculously huge it could not be written out in full even using all the matter in the universe).[59]
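
That number is written in Knuth's up-arrow notation. A minimal sketch of how the notation explodes (the recursion is the standard definition; actually evaluating 3^^^3 is impossible, which is rather the point):

    def up_arrow(a, n, b):
        """Knuth's up-arrow a ↑^n b; n=1 is ordinary exponentiation."""
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(3, 1, 3))  # 3^3 = 27
    print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
    # 3^^^3 = 3^^(3^^3): a power tower of threes roughly 7.6 trillion levels
    # high -- don't try to compute it.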

In July 2010, Roko wondered if a future Friendly AI would punish people who didn't do everything in their power to further the AI research from which this AI originated, by at the very least donating all they have to it. He reasoned that every day without AI, bad things happen (150,000+ people die every day, war is fought, millions go hungry) and a future Friendly AI would want to prevent this, so it might punish those who understood the importance of donating but didn't donate all they could. He then wondered if future AIs would be more likely to punish those who had wondered if future AIs would punish them. That final thought proved too much for some LessWrong readers, who then had nightmares about being tortured for not donating enough to SIAI.

Yudkowsky replied to Roko's post calling him names, claiming that posting such things on an Internet forum "potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us", and that users had told him of nightmares prompted by the post. Four hours later, he deleted Roko's post,[60] including all comments. Roko left LessWrong, deleting his thousands of posts and comments.[61] (He later briefly returned[62] and posted among other things that "I agree that the post in question should not appear in public"[63] and "I wish I had never learned about any of these ideas".[63])

What you know: When Roko posted about the Basilisk, I very foolishly yelled at him, called him an idiot, and then deleted the post.

Why I did that is not something you have direct access to, and thus you should be careful about Making Stuff Up, especially when there are Internet trolls who are happy to tell you in a loud authoritative voice what I was thinking, despite having never passed anything even close to an Ideological Turing Test on Eliezer Yudkowsky.

Why I yelled at Roko: Because I was caught flatfooted in surprise, because I was indignant to the point of genuine emotional shock, at the concept that somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public Internet. In the course of yelling at Roko to explain why ths[sic] was a bad thing, I made the further error---keeping in mind that I had absolutely no idea that any of this would ever blow up the way it did, if I had I would obviously have kept my fingers quiescent — of not making it absolutely clear using lengthy disclaimers that my yelling did not mean that I believed Roko was right about CEV-based agents[64] torturing people who had heard about Roko's idea. It was obvious to me that no CEV-based agent would ever do that and equally obvious to me that the part about CEV was just a red herring; I more or less automatically pruned it from my processing of the suggestion and automatically generalized it to cover the entire class of similar scenarios and variants, variants which I considered obvious despite significant divergences (I forgot that other people were not professionals in the field). This class of all possible variants did strike me as potentially dangerous as a collective group, even though it did not occur to me that Roko's original scenario might be right — that was obviously wrong, so my brain automatically generalized it.

At this point we start to deal with a massive divergence between what I, and several other people on LessWrong, considered to be obvious common sense, and what other people did not consider to be obvious common sense, and the malicious interference of the Internet trolls at RationalWiki.

What I considered to be obvious common sense was that you did not spread potential information hazards because it would be a crappy thing to do to someone. The problem wasn't Roko's post itself, about CEV, being correct.[65]

Yudkowsky later claimed the basilisk would not in fact work the way Roko had posited; but rather than simply explaining how such a reaction was inappropriate or how the ideas underlying it were flawed, he instead attempted to censor all discussion of the topic. The matter is now the occasional subject of contorted LW posts, as people try to discuss the issue without talking about what they're talking about,[66][67] and is a reliable space-filler for journalists covering LessWrong-related stories.[14]

How Yudkowsky deals with critics[edit]

One knowledgeable critic of LessWrong said that several years of ongoing harassment concerning his controversial posts had been affecting his health.[68] Yudkowsky responded with the following, which is no doubt an entirely reasonable thing to ask someone to do:

You can update by posting a header to all of your blog posts saying, "I wrote this blog during a dark period of my life. I now realize that Eliezer Yudkowsky is a decent and honest person with no ill intent, and that anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret this page, and leave it here as an archive to that regret." If that is how you feel and that is what you do, I will treat with you starting from scratch in any future endeavors. I've been stupid too, in my life. (If you then revert to pattern, you do not get a second second chance.)[69]

That doesn't sound like cultish behavior at all, now does it?

Note that most of these blog posts don't even mention Yudkowsky, much less quote mine him. But still, he wants his own header on top of every single blog post.

This was eventually resolved by removing most of the dispute-related content, placing disclaimers where relevant, and letting the matter go.

Finances[edit]

LessWrong was run under the umbrella of MIRI (probably up until the 2017 reorganization).[70] Notable sponsors include PayPal founder (and 2016 Donald Trump delegate) Peter Thiel, who contributed five-figure sums for several years. Many donations of late come from cryptocurrency winners; Ethereum founder Vitalik Buterin was the largest single donor in 2017.[71] Smaller contributions are also solicited and accepted.[72] A $118,000 loss due to theft was reported in 2009, and has not been recovered.[73] MIRI uses its funds to recompense its officers (Yudkowsky made $147,697/year in 2019[74]) and to fund research, although no articles were placed in any peer-reviewed journals until 2012. In that year, two articles were placed in the ridiculously-titled International Journal of Machine Consciousness[75] and one in the Journal of Consciousness Studies,[76] both of which compare unfavorably to more prestigious journals in the field, such as Artificial Intelligence.[77] Requests for funding are regularly made on the LessWrong website and at conferences, under the justification that MIRI is, uniquely, working to save the world from a threat to its very existence. Yudkowsky is happy to solicit financial contributions even from non-wealthy individuals, in order to "save" them from the speculative scenario of runaway artificial intelligence.

Lighthaven[edit]

Yudkowsky (left) debating Destiny at the 2023 Lighthaven conference

In 2024, it was revealed that Lightcone had received $5 million from the collapse of Sam Bankman-Fried’s FTX cryptocurrency foundation. Due to the large-scale fraud involved in the collapse of FTX, Lightcone was summoned to a court appearance, and creditors are asking for the return of the funds that Lightcone received.[78] The property that Lightcone purchased in part with the FTX Foundation donation is called Lighthaven.[78] Lighthaven is a retreat/workshop/conference/event campus that hosts futurist-oriented events ("humanity's long-term trajectory").[79]

Another consequence of purchasing the Lighthaven campus is that some of the events held there show, at best, poor judgment by Lightcone as to the nature of the conferences it hosts. Manifest, a forecasting conference run by the prediction marketplace Manifold, was held there on September 22-24, 2023,[78][80] and June 7-9, 2024.[81] While forecasting is one of LessWrong's and Yudkowsky's favored subjects, the "special guests" and additional named attendees included some controversial people.[81][82]

Incidentally, Nate Silver,[note 4] Destiny, and Yudkowsky were special guests at both conferences. The presence of Yudkowsky at both conferences leaves no doubt that Lightcone knew who was appearing and speaking. Regarding the controversial speakers, Austin Chen, co-founder of Manifold, claimed that "We did not invite them to give talks about race and IQ" and "Manifest has no specific views on eugenics or race & IQ."[78] According to The Guardian, the gathering of these diversely repugnant speakers is further evidence of the TESCREAL hypothesis.[78]

A "please ring bell for psychic" sign

It is rather deliciously ironic that these two conferences of self-proclaimed longtermist prognosticators could not see past their own hubris and predict the short-term consequences of the downfall of FTX, whose donations were entangled with their own conference property. This inability to foresee the near-term future, while proclaiming superior ability to foresee the essentially unfalsifiable long-term future, is essentially equivalent to the psychic who feels the need to place a "ring bell for psychic" sign on their door: what, the psychic doesn't already know that I'm here?

GiveWell assessment[edit]

GiveWell's 2012 evaluation of SIAI as a charity[11] sets out a number of problems with the organisation, the site and its aims as compared to its verifiable results:

  • SIAI's argument for its work is poorly advanced
  • Its arguments involve huge consequences of small probabilities (Pascal's mugging)
  • The artificial intelligence propositions advanced do not engage mainstream research and are not endorsed by active researchers in the field
  • The group shies away from putting its claims to tests
  • Apparent poorly grounded belief in the group's superior general rationality (the point of LessWrong)
  • Overall disconnect between SI's goals and its activities

GiveWell's recommendation was that at this stage (2012), donating to SIAI would work against the organisation's stated goals, which is approximately the worst thing that one could say about a charity.

Holden Karnofsky of GiveWell (and its spawn, Open Philanthropy) has since quietly walked back his 2012 opinion,[85][86][87][88][89][90][91] and the effective altruism subculture is very into LessWrong these days — because they've been driving the remaining sensible people out.[92]

In popular culture[edit]

  • Zendegi by Greg Egan (2010) features Nate Caplan, who wants to be uploaded. "My IQ is one hundred and sixty ... You can always reach me through my blog, Overpowering Falsehood dot com, the number one site for rational thinking about the future —" The novel also features the Benign Superintelligence Bootstrap Project, which persuades a billionaire to donate his fortune, hoping that the "being of truly God-like powers" will grant him immortality come the Singularity. He dies disappointed, and the Project "turn[s] five billion dollars into nothing but padded salaries and empty verbiage."[93][94]
  • Elementary S3E4, "Bella" (2014), features an artificial intelligence and the theory that a member of the Existential Threat Research Association killed its creator to increase the credibility of their anti-AI message. Also includes discourse on "the terror of mortality" and some I-know-that-you-know-that-I-know discussions between Sherlock and his suspect.[95]
  • The Basilisk Murders: A Sarah Turner Mystery by Andrew Hickey (2017). ISBN 1549999869. A satirical murder mystery set in the world of transhumanists and singularitarians.
  • Ex Machina,[96] a movie about a reclusive AI genius who resembles Yudkowsky physically, but not in his ability to actually build stuff.

See also[edit]

External links[edit]

Notes[edit]

  1. The Sequences were estimated to total about 630,000 words[31] vs. Tolkien's Lord of the Rings trilogy + The Hobbit at 576,459 words, or 482,058 words excluding The Hobbit.[32]
  2. The simple Caesar cipher widely known since the days of Usenet as ROT13 transforms "cult" to "phyg" and back again (a short code sketch follows these notes).
  3. Eliezer Yudkowsky, Thou Art Physics:

    “Compatibilism” is the philosophical position that “free will” can be intuitively and satisfyingly defined in such a way as to be compatible with deterministic physics. “Incompatibilism” is the position that free will and determinism are incompatible.
    My position might perhaps be called “Requiredism.” When agency, choice, control, and moral responsibility are cashed out in a sensible way, they require determinism — at least some patches of determinism within the universe.[53]

  4. Following the 2024 event, Silver joined the Thiel-sponsored, cryptocurrency-based Polymarket event-forecasting marketplace.[83][84]
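
For the curious, note 2's cipher in full: ROT13 rotates each letter of the alphabet 13 places, so applying it twice gets you back where you started. A minimal sketch:

    def rot13(text):
        """Rotate each letter 13 places; ROT13 is its own inverse."""
        out = []
        for ch in text:
            if "a" <= ch.lower() <= "z":
                base = ord("a") if ch.islower() else ord("A")
                out.append(chr((ord(ch) - base + 13) % 26 + base))
            else:
                out.append(ch)
        return "".join(out)

    print(rot13("cult"))  # phyg
    print(rot13("phyg"))  # cult

Python's standard codecs module ships the same transform under the name "rot_13".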

References[edit]

  1. Not a Review of Neoreaction a Basilisk (May 31, 2016; 9:30 am) Andrew Hickey (archived from December 18, 2018).
  2. 2.0 2.1 The LessWrong Team is now Lightcone Infrastructure, come work with us! by habryka (30th Sep 2021)
  3. SIAI's year 2000 990-EZ Internal Revenue Service via Singularity Institute (archived from December 22, 2012).
  4. Overcoming Bias
  5. About, Overcoming Bias.
  6. 2012 Winter Fundraiser for the Singularity Institute LessWrong.
  7. SIAI's year 2009 990 Internal Revenue Service via Singularity Institute (archived from September 10, 2012).
  8. "important and interesting in proportion to how much it helps construct a Friendly AI" by Eliezer Yudkowsky (2011) LessWrong.
  9. About Us: 2009 SIAI Accomplishments Singularity Institute (archived from June 21, 2011).
  10. Tallinn-Evans $125,000 Singularity Challenge by Kaj Sotala (26th Dec 2010) LessWrong.
  11. 11.0 11.1 Thoughts on the Singularity Institute by Holden Karnofsky (10th May 2012) LessWrong.
  12. SL4
  13. Email Lists, Communities and Forums Extropy Institute (archived from September 6, 2014).
  14. 14.0 14.1 Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set by Nitasha Tiku (25 July 2012) Beta Beat, The New York Observer (archived from February 2, 2013).

    Michael Vassar, the former president of Singularity Institute, who stepped down in January to pursue his idea for a personalized medicine startup–later bringing on Mr. Mowshowitz and Ms. Vance–admitted the nonprofit had learned to hide some of its more radical ideas, emphasizing rationality instead. As Mr. Yudkowsky put it, “There are plenty of people out there who would be interested in cognitive science-based thinking skills who wouldn’t necessarily buy into the whole 'save humanity' thing."

  15. HPMOR by Eliezer Yudkowsky
  16. Survey Results by Scott Alexander (12th May 2009) LessWrong.
  17. 17.0 17.1 2011 Survey Results by Scott Alexander (5th Dec 2011) LessWrong.
  18. Center For Applied Rationality: Developing clear thinking for the sake of humanity’s future Center For Applied Rationality.
  19. Welcome to CFAR! by Julia, Center for Applied Rationality (archived from 7 Jul 2013 13:28:00 UTC).
  20. What We Do Center for Applied Rationality (archived from 7 Jul 2013 13:28:05 UTC).
  21. Rationalist Masterlist by yxoque (Last updated September 1st 2014) Tumblr (archived from September 6, 2014).
  22. Slate Star Codex
  23. Slate Star Scratchpad (Aug 17th, 2015) Tumblr.
  24. The Genetic Fallacy by Eliezer Yudkowsky (10th Jul 2008) LessWrong.
  25. Ugh Fields by Roko (12th Apr 2010) LessWrong.
  26. Affective Death Spirals by Eliezer Yudkowsky (2nd Dec 2007) LessWrong.
  27. Talk:Lee Daniel Crocker (revision of 18 January 2011) Wikipedia.
  28. An Intuitive Explanation of Bayes’ Theorem (2003) Eliezer Yudkowsky.
  29. Jargon Lesswrongwiki.
  30. Sequences LessWrong.
  31. The Secret Society for Suppressing Stupidity Putanumonit.
  32. How many words are in LOTR books? by Vittorio Cobianchi and Ayesha Rasheed, Quora.
  33. The Fabric of Reality: The Science of Parallel Universes — and Its Implications by David Deutsch (1997). Penguin Books. ISBN 014027541X.
  34. Thinking, Fast and Slow by Daniel Kahneman (2011) Farrar, Straus and Giroux. ISBN 0374533555.
  35. The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb (2010) Random House. Second edition. ISBN 081297381X.
  36. Thinking and Deciding by Jonathan Baron (2023) Cambridge University Press. Fifth edition. ISBN 1009263641.
  37. Which Basis Is More Fundamental? by Eliezer Yudkowsky (23rd Apr 2008) LessWrong.
  38. I fucking hate these people so fucking much. Poe News (archived 22 July 2013).
  39. Is it OK to talk about politics? LessWrong Wiki.
  40. Politics Is the Mind-Killer by Eliezer Yudkowsky (18th Feb 2007) LessWrong.
  41. A Fable of Science and Politics by Eliezer Yudkowsky (22nd Dec 2006) LessWrong
  42. Participation in the LW Community Associated with Less Bias (9th Dec 2012) LessWrong.
  43. Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set by Nitasha Tiku (07/25/12 8:45am) New York Observer.
  44. Comments on "Rationality Quotes, November 2012", LessWrong
  45. "Giving What We Can: 80,000 Hours and Metacharity" (6th Nov 2012) LessWrong.
  46. Every Cause Wants To Be A Cult by Eliezer Yudkowsky (11th Dec 2007) LessWrong.
  47. Our Phyg Is Not Exclusive Enough (14th Apr 2012) LessWrong.
  48. Survey Results by Scott Alexander (12th May 2009) LessWrong.
  49. Beware of WEIRD psychological samples by Paul Crowley (13th Sep 2009) LessWrong.
  50. Poll - Is endless September a threat to LW and what should be done? by Epiphany (8th Dec 2012) LessWrong.
  51. Train Philosophers with Pearl and Kahneman, Not Plato and Kant by lukeprog (5th December 2012) LessWrong.
  52. Eliezer's Sequences and Mainstream Academia by lukeprog (14th September 2012) LessWrong.
  53. Thou Art Physics by Eliezer Yudkowsky (6th June 2008) LessWrong.
  54. The Fallacy of Gray by Yudkowsky Eliezer (7th January 2008) LessWrong.
  55. Rationality and Mainstream Philosophy by lukeprog (20th March 2011) LessWrong.
  56. Good and Real: Demystifying Paradoxes from Physics to Ethics by Gary Drescher (2006) MIT Press. ISBN 9780262042338.
  57. The Most Terrifying Thought Experiment of All Time: Why are techno-futurists so freaked out by Roko’s Basilisk? by David Auerbach (July 17, 2014, 2:47 PM) Slate.
  58. Ingredients of Timeless Decision Theory by Eliezer Yudkowsky (18th Aug 2009) LessWrong.
  59. Torture vs. Dust Specks by Eliezer Yudkowsky (29th Oct 2007) LessWrong.
  60. Solutions to the Altruist's burden: the Quantum Billionaire Trick by Roko (23 July 2010 12:30PM) LessWrong (archived from a snapshot of Google's cache, 10 Sep 2012 12:16:00 UTC). Please do not read if you are prone to nightmares or OCD and/or are worried about causing incalculable harm to yourself.
  61. Roko LessWrong.
  62. FormallyknownasRoko LessWrong.
  63. 63.0 63.1 FormallyknownasRoko comments on Best career models for doing research? - Less Wrong by Comment author: FormallyknownasRoko (09 December 2010 10:17:05PM and 10 December 2010 05:06:28PM) LessWrong (archived from October 21, 2016).
  64. Coherent Extrapolated Volition LessWrong.

    Coherent Extrapolated Volition was a term developed by Eliezer Yudkowsky while discussing Friendly AI development. It’s meant as an argument that it would not be sufficient to explicitly program what we think our desires and motivations are into an AI, instead, we should find a way to program it in a way that it would act in our best interests – what we want it to do and not what we tell it to.

  65. Roko's Basilisk by Eliezer Yudkowsky (c. 2014) Reddit.
  66. Should LW have a public censorship policy? by Bongo (11th Dec 2010) LessWrong.
  67. Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set comment by Bruno Coelho (25th Jul 2012) LessWrong.
  68. Breaking the vicious cycle by XiXiDu, LessWrong, 23 November 2014.
  69. Breaking the vicious cycle comment by Eliezer Yudkowsky, LessWrong, 24 November 2014.
  70. LessWrong (archived from 14 Mar 2017 10:12:46 UTC). CFAR, MIRI, and Future of Humanity Institute were in the website homepage banner in March 2017.
  71. Fundraising success! by Malo Bourgon (January 10, 2018) MIRI.
  72. I just donated to the SIAI by Pavitra (15th Jul 2011) LessWrong.
  73. SIAI - An Examination by BrandonReinhart (2nd May 2011) LessWrong.
  74. Machine Intelligence Research Institute Inc ProPublica.
  75. International Journal of Machine Consciousness Scimago Journal & Country Rank.
  76. Journal of Consciousness Studies Scimago Journal & Country Rank.
  77. Artificial Intelligence Scimago Journal & Country Rank.
  78. 78.0 78.1 78.2 78.3 78.4 Sam Bankman-Fried funded a group with racist ties. FTX wants its $5m back: The Guardian reveals FTX trustees, in charge after the CEO’s downfall, allege payments were made with looted funds by Jason Wilson & Ali Winston (16 Jun 2024 06.00 EDT) The Guardian.
  79. Lighthaven
  80. Manifest 2023
  81. 81.0 81.1 Manifest 2024
  82. Special Guests Manifest Conference (archived from August 2, 2023).
  83. Polymarket Hires Nate Silver After Taking in $265M of Bets on U.S. Election: Report by Oliver Knight (Jul 17, 2024, 8:49 AM PDT) Yahoo! Finance.
  84. Peter Thiel Invests In Polymarket Political Betting Platform—But The Future Of Gambling On Elections Remains Unclear by Zachary Folk (May 14, 2024, 02:19pm EDT) Forbes.
  85. Alexander Berger is Now Sole CEO of Open Philanthropy (July 27, 2023) Open Philanthropy. Karnofsky was co-founder and co-CEO of Open Philanthropy until mid-2023.
  86. Center for Applied Rationality — General Support (July 2016) Open Philanthropy. Amount: $1,035,000.
  87. Machine Intelligence Research Institute — General Support (2016) (August 2016) Open Philanthropy. Amount: $500,000.
  88. Machine Intelligence Research Institute — General Support (2017) (October 2017) Open Philanthropy. Amount: $3,750,000.
  89. Machine Intelligence Research Institute — AI Safety Retraining Program (June 2018) Open Philanthropy. Amount: $150,000.
  90. Machine Intelligence Research Institute — General Support (2019) (February 2019) Open Philanthropy. Amount: $2,652,500.
  91. Machine Intelligence Research Institute — General Support (2020) (February 2020) Open Philanthropy. Amount: $7,703,750.
  92. I spent a weekend at Google talking with nerds about charity. I came away … worried. by Dylan Matthews (10 August 2015) Vox.
  93. Zendegi by Greg Egan review by Gareth Rees (2010-08-17) LiveJournal.
  94. Zendegi by Greg Egan (2010) Night Shade Books. ISBN 1597801755.
  95. Elementary Season 3 Episode 4 "Bella" — Recap and Review Buddy2Blogger.
  96. Ex Machina IMDb.