
Eliezer Yudkowsky

Yudkowsky's meat suit, 2006 (prior to uploading his consciousness into a quantum computer).
Before Bruce Schneier goes to sleep, he scans his computer for uploaded copies of Eliezer Yudkowsky.
—Eliezer Yudkowsky Facts, LessWrong[1]

Eliezer S. Yudkowsky (1979–) is an American artificial intelligence (AI) researcher, blogger, shitty fan fiction writer, and autodidact exponent of his own specifically Bayes-based brand of human rationality. Yudkowsky cofounded and works at the Machine Intelligence Research Institute (formerly the Singularity Institute for Artificial Intelligence), a nonprofit organization concerned with the concept known as the singularity.[2] He began his blogging career alongside George Mason University economist Robin Hanson at Overcoming Bias, and later founded the blog LessWrong as a sister site and offshoot of it.

Being idealistic enough to want everyone (above a certain income bracket) to live forever, he also has an overweening interest in cryonics and is (probably) vulnerable to hemlock.

AI "research"[edit]

Find whatever you're best at; if that thing that you're best at is inventing new math[s] of artificial intelligence, then come work for the Singularity Institute. If the thing that you're best at is investment banking, then work for Wall Street and transfer as much money as your mind and will permit to the Singularity Institute where [it] will be used by other people.
—Eliezer Yudkowsky on how to save the human race.[3]

Yudkowsky believes he has identified a "big problem in AI research": we can't assume an AI would care about humans or ethics, since it wouldn't share our evolutionary history. Believing that artificial general intelligence (AGI) is imminent, Yudkowsky has taken it upon himself to create a Friendly AI (FAI), one that won't kill us, inadvertently or otherwise.[4]

"AI foom," short for "recursively self-improving Artificial Intelligence engendered singularity," comes from these ideas:

  • An AI-based singularity will occur soon.
  • AIs would want to improve themselves, and humans would want them to.
  • Human-level or better AIs could not be imprisoned or threatened with having their plugs pulled because they would talk their way out of the situation. Interestingly, Yudkowsky claims to have tested this idea by role-playing the part of supersmart AI, with a lesser mortal playing the jailer. However — the reader may note the beginning of a pattern here — the transcripts are unpublished.
  • AIs will, at some point, gain virtually unlimited control over the physical world through nanotechnology.

His transhumanist philosopher and existential risk theorist friend Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford until it was dissolved in 2024,[5] wrote Superintelligence: Paths, Dangers, Strategies on the above ideas in 2014.[6] Bostrom's central thesis appears to be "you can't prove this won't happen, therefore you should worry about it." Bostrom credits Yudkowsky in the acknowledgments section and quotes from him throughout the book.[note 1]

As is evident from the dialogue between Luke Muehlhauser and Pei Wang,[7] Yudkowsky's (and thus LessWrong's) conception of AGI differs starkly from the mainstream scientific and practical research understanding of it.[7]

Yudkowsky believes an AGI will come either from decision theory (despite his worked examples being computationally intractable) or from Bayesian logic (which does not scale in real AI systems).[7]
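A minimal sketch can show why the intractability objection bites (the code below is purely illustrative, uses a toy distribution of independent biased coins, and is not anything Yudkowsky or MIRI has published): exact Bayesian inference done by brute force must enumerate every joint state, so the work grows exponentially with the number of variables.

```python
# Illustrative sketch only: brute-force exact Bayesian inference over n binary
# variables enumerates all 2**n joint states, so adding variables blows up the work.
from itertools import product

def brute_force_posterior(joint_prob, evidence, query_var, n_vars):
    """Compute P(query_var = 1 | evidence) by summing over every joint state."""
    numerator = 0.0
    denominator = 0.0
    for state in product([0, 1], repeat=n_vars):   # 2**n_vars iterations
        if any(state[i] != v for i, v in evidence.items()):
            continue                               # state contradicts the evidence
        p = joint_prob(state)
        denominator += p
        if state[query_var] == 1:
            numerator += p
    return numerator / denominator

# Toy joint distribution (independent coins with bias 0.6), chosen only to keep this runnable.
def toy_joint(state):
    ones = sum(state)
    return 0.6 ** ones * 0.4 ** (len(state) - ones)

print(brute_force_posterior(toy_joint, evidence={0: 1}, query_var=1, n_vars=20))
# 20 variables already means about a million states; 300 variables would exceed the
# number of atoms in the observable universe.
```

Practical AI systems therefore lean on approximations such as sampling or variational methods rather than the idealized Bayesian reasoner that LessWrong writing tends to invoke.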

Since these are both totally abstract mathematical functions, there is no reason an AGI based on them would share human values (not that one can base an AGI on them). By this reasoning, Yudkowsky sees the need to bolt on some sort of morality module which will somehow be immune from alteration, even though the "foom" scenario he fears so much is based on AGIs which can rewrite their own code. (Bostrom points out in his book that there are several situations in which an AI might have an incentive to alter its own final goals.) Thus Yudkowsky sees the need to "solve" ethics in some form that can be computerized — although ethics remains an open problem in philosophy after thousands of years. However, Plato, Aristotle and Kant just weren't as smart as Yudkowsky believes himself to be.

In addition, some of Yudkowsky's critics identify more urgent threats from AI than unfriendliness: deliberate weaponization of AIs, coding errors, humans themselves, cyborg ethics, etc.[8]

Yudkowsky/Hanson debates[edit]

Economist Robin Hanson, Yudkowsky's then co-blogger, has debated Yudkowsky on "AI foom" on several occasions. Hanson, who had previously been an AI researcher for several years,[9] wrote a series of blog posts with Yudkowsky in 2008 and debated him in person in 2011. These debates were later collected and published as an ebook.[10] In 2014, after having read Bostrom's book Superintelligence, Hanson was sticking to his guns: he wrote another blog post entitled "I Still Don't Get Foom". He argued:

"Intelligence" just means an ability to do mental/calculation tasks, averaged over many tasks. I've always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I've found it hard to accept is a "local explosion". This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g. weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme.[11]

Immediately after this debate, Yudkowsky left Overcoming Bias (now Hanson's blog) and moved the Sequences to LessWrong. The two still weren't seeing eye to eye in 2016, after the famous Go match between Google's AlphaGo software and Lee Sedol.[9] Hanson argued that general-purpose AI would be like a nation-state, requiring many workers and extensive idea-sharing, while limited special-purpose AI would be like product teams within businesses, and he offered a criterion for testing his theory. Yudkowsky replied that he thought the world would already have "ended" before there was a chance to verify it, claiming that different AI labs each did their own training with almost no exchange of pre-trained modules or specialist components. (In fact, general AI components are widely reused between companies.)[12]

Yudkowsky had previously announced that if AlphaGo beat the Go grandmaster (which, as we now know, it did), it would mean the singularity was getting closer. However, the previous occasions on which computers beat top-ranked humans at a board game (checkers and chess) still left computers well below the intelligence of a human infant.

Singularity Institute / MIRI[edit]

In 1998, Eliezer Yudkowsky finished the first version of 'Coding a Transhuman AI', also known as 'Notes on the Design of Self-Enhancing Intelligence'.[13] He then spent a few months writing design notes for a new programming language named Flare, followed by 'The Plan to Singularity',[14] which proposed a Singularity Institute. At a Foresight Institute gathering in May 2000,[15] two others backed the idea of establishing the institute, and the Singularity Institute soon had a website.[16] Yudkowsky then wrote a revised version of 'Coding a Transhuman AI'.[17]

On July 23, 2001, SI started the Flare project under the leadership of one Dimitry Myshkin; the project was canceled in 2003.[18] From the very start, Yudkowsky made the creation of 'Friendly AI' central to his mission, and in 2001 he wrote the book-length 'Creating Friendly AI'.[4][note 2]

In 2004, Eliezer Yudkowsky had a "major insight": Coherent Extrapolated Volition (CEV). A person might not always want what is actually best for them, but Yudkowsky wants his AI to do good for people anyway. He believes that to find what is best for a person, the AI would scan that person's brain and do "something complicated", and to ensure it is friendly to all humans, it would do this to everyone. The sheer computing power needed is incomprehensibly large, and we have no idea what this would look like if implemented.[19]

Until 2008 or 2009, the Singularity Institute was practically a one-man show: all its publications bore Yudkowsky's name, and it had no other full-time employee. That changed in 2008 with the emergence of LessWrong. New publications by new authors such as Carl Shulman, N. Tarleton and, after some time, Luke Muehlhauser began to appear, and this became the norm, with newly inducted researchers writing most of the papers.

Decision theory became the new focus of research at SI; in 2010, Eliezer Yudkowsky published 'Timeless Decision Theory'.[20]

Yudkowsky's collaboration with Bostrom's Future of Humanity Institute also became visible around this time, with joint articles by SI and FHI researchers such as Anders Sandberg[21] and Stuart Armstrong. A joint article by Yudkowsky and Bostrom appeared in 2011.[22] Bostrom's 2014 book Superintelligence refers to Yudkowsky 15 times and devotes nearly five pages to Coherent Extrapolated Volition.[23] In 2009, Jeffrey Epstein donated $50,000 to SIAI.[24]

Other projects[edit]

Fan fiction[edit]

Oh god. The future of the human species is in the hands of a guy who writes crossover fan-fiction.
Scott Alexander[25]

For a slightly more humorous but insightful look into his mind, Yudkowsky has written Harry Potter fanfiction to teach rationality to a larger audience. In the story, a rationalist Harry tries to get his head around magic[26] while lecturing his elders (and the reader) on science.

Harry Potter and the Methods of Rationality (HPMoR) is widely acclaimed by the standards of fanfiction (such as they are), with many readers hailing it as one of the best of its kind.[27] Yudkowsky and his fans tried to spam it for the 2016 Hugo Awards for science fiction and fantasy,[28][29][30][31] though it didn't make the shortlist. The podcast version was a Parsec Awards finalist in 2012[32] and 2015.[33]

Some readers felt disillusioned by the sermonising.[34] As Yudkowsky himself notes in a Facebook posting:[35]

People who have this [status regulating] emotion leave angry reviews on Chapter 6 of HPMOR where they complain about how much Harry is disrespecting Professor McGonagall and they can't read this story.

LessWrong may have developed a personality cult around Yudkowsky, complete with a list of Eliezer Yudkowsky Facts,[1] but the fan fiction about him was apparently a play, produced without his knowledge.[36][37]

Arbital[edit]

After tiring of LessWrong and finishing his Harry Potter fanfic, Yudkowsky turned to his next project: Arbital, an exciting new competitor in the educational market with a different approach to learning than Wikipedia's.[38] He first proposed it in March 2015:[30]

First, I've designed an attempted successor to Wikipedia and/or Tumblr and/or peer review, and a friend of mine is working full-time on implementing it. If this project works at the upper percentiles of possible success (HPMOR-level 'gosh that sure worked', which happens to me no more often than a third of the time I try something), then it might help to directly address the core societal problems (he said vaguely with deliberate vagueness).

With $300k in funding, developers were hired, with Yudkowsky as overseer, and there were several changes of direction. The site went live in early 2016 as, essentially, the LessWrong Sequences in a different bottle. By late 2016, it was billing itself as "the place for crowdsourced, intuitive math explanations", which had a chance of beating the notoriously opaque math articles on Wikipedia, except that they couldn't find enough knowledgeable people to write the explanations. By early 2017, the front page was a Slashdot-style aggregator for people in the MIRI subculture; by the end of March, they had admitted defeat[39] and turned the front page into a coming-soon notice for a blogging platform. A project developer lamented Yudkowsky's leadership style:

Also around that time, it became clear to us that things just weren't going well. The primary issue was that we completely relied on Eliezer to provide guidance to which features to implement and how to implement them. Frequently when we tried to do things our way, we were overruled. (Never without a decent reason, but I think in many of those cases either side had merit. Going with Eliezer's point of view meant we frequently ended up blocked on him, because we couldn't predict the next steps.) Also, since Eliezer was the only person seriously using the product, there wasn't enough rapid feedback for many of the features. And since we[sic] wasn't in the office with us every day, we were often blocked.

So, we decided to take the matter into our own hands. The three of us would decide what to do, and we would occasionally talk to Eliezer to get his input on specific things.[40]

Achievements[edit]

Why should I follow your Founder when he isn't an Eighth Level anything outside his own cult?
—Eliezer Yudkowsky[41]
Yudkowsky: Christ is Risen[42][43]

With no training in his field of interest, Yudkowsky has no accomplishments to his credit beyond getting Peter Thiel to give him money (and, later, getting Ethereum cryptocurrency programmers to do the same).

Even his fans admit, "A recurring theme here seems to be 'grandiose plans, left unfinished'."[44] He claims to be a skilled computer programmer, but the only code he has made available is Flare, an unfinished computer language for AI programming with XML-based syntax.[45] It was supposed to be the language in which SIAI would code its Seed AI, but the project was canceled in 2003. He hired professional programmers to implement his Arbital project (see above), which eventually failed.

Yudkowsky is almost entirely unpublished outside his own foundation and blogs[46] and never attended high school or college,[47] much less did any AI research. No samples of his AI coding have been made public. In 2017 and 2018 he self-published some pop science books with titles like Inadequate Equilibria, carrying endorsements from fellow travelers Bryan Caplan and Scott Aaronson.[48]

Yudkowsky's papers are generally self-published and, as of 2015, had collected a total of two citations in JSTOR-archived journals (neither having anything to do with AI), one of which came from his friend Nick Bostrom at the closely associated Future of Humanity Institute.[49][50]

Yudkowsky's observable results in the real world are a fanfiction wildly acclaimed by the standards of the genre, a pastiche erotic light novel,[51] a large pile of blog posts and self-published nonfiction, and a surprisingly well-funded research organization that has produced fewer papers in a decade and a half than a single graduate student does in the course of a physics Ph.D., and the graduate student's papers are peer reviewed.

Genius or crank?[edit]

Yudkowsky refers to himself as a genius six times in the "autobiography" he wrote in 2000, at the age of 20.[52] To his credit, he has since disavowed it, considering his younger self an idiot. He also claims to have "randomly won the writing talent lottery".[53] He thinks most Ph.D.s are pretty ordinary compared to the people he hobnobs with:

The really striking fact about the researchers who show up at AGI conferences, is that they're so… I don't know how else to put it…

…ordinary.

Not at the intellectual level of the big mainstream names in Artificial Intelligence. Not at the level of John McCarthy or Peter Norvig (whom I've both met).

I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them — just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified.

However, he admits to possibly being less smart than John Conway.[54] As a homeschooled individual with no college degree, Yudkowsky may not be in an ideal position to estimate his own intelligence; for obvious reasons, many of his followers think he is a genius.[55][56][57][58] Similarly, some of his followers are derisive of renowned scientists; just look for comments about "not smart outside the lab" and "for a celebrity scientist."[59] Yudkowsky believes a doctorate in AI is a net negative when it comes to building Seed AI.[60] While he doesn't attack Einstein, he does think the scientific method cannot handle things like the many-worlds interpretation or his view of Bayes' theorem.[61] LessWrong also has its own unique jargon.[62]

Just so that the reader doesn't get the mistaken impression that Yudkowsky boasts about his intellect incessantly, here he is, boasting about how nice and good he is:[63]

I took a couple of years of effort to clean up the major emotions (ego gratification and so on), after which I was pretty much entirely altruistic in terms of raw motivations, although if you'd asked me I would have said something along the lines of: "Well, of course I'm still learning… there's still probably all this undiscovered stuff to clean up…"

Disagreement with Yudkowsky's ideas is often attributed to "undiscriminating skepticism". If you don't believe cryonics works, it's because you have watched Penn & Teller: Bullshit!.[64] It's just not a possibility that you don't believe it works because it has failed tests and is made improbable by the facts.[65]

Other controversial positions[edit]

Despite being viewed by his fans as the smartest two-legged being ever to walk this planet, Yudkowsky, along with much of the LessWrong community, endorses several positions that are the subject of bitter debate and controversy in their respective fields. For example, he is an idealistic and hopeful believer in transhumanism and cryonics, and he preaches the many-worlds interpretation (MWI) of quantum physics as a "slam dunk" despite the lack of scientific consensus among quantum physicists and the belief by some that it is unfalsifiable.[66][67] Yudkowsky concedes that a majority of theoretical physicists might not share his view, adding that attempted opinion polls conflict on this point.[68]

Concerning Bayes' theorem, Yudkowsky believes it can contradict and supersede the scientific method[69] and wants to apply Bayesian probability indiscriminately.[note 3]
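The theorem itself is uncontroversial bookkeeping, as a minimal worked example shows (the numbers below are invented for illustration and are not Yudkowsky's): given a prior and the relevant likelihoods, it tells you how to update. What it cannot tell you is where the prior or the likelihoods come from, which is where the claim that it supersedes the scientific method runs into trouble.

```python
# Worked Bayes' theorem example with illustrative numbers: a test with 99% sensitivity
# and 95% specificity for a condition with a 1% base rate.
def posterior(prior, sensitivity, specificity):
    """P(condition | positive test) via Bayes' theorem."""
    p_pos_given_cond = sensitivity                    # P(+ | condition)
    p_pos_given_healthy = 1.0 - specificity           # P(+ | no condition)
    p_pos = p_pos_given_cond * prior + p_pos_given_healthy * (1.0 - prior)
    return p_pos_given_cond * prior / p_pos

print(posterior(prior=0.01, sensitivity=0.99, specificity=0.95))
# ≈ 0.167: even a "99% accurate" test leaves only about a 17% chance the condition
# is present, because the answer depends entirely on the prior fed into the formula.
```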

Among other things, Yudkowsky also claims that non-computable results like Kolmogorov complexity are a reasonable basis for an entire epistemology, that evolutionary psychology is well-established science, and that philosophy is a subject so easy he can "re-invent it on the fly".[71]
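On the Kolmogorov-complexity point, the difficulty is that K(x) is uncomputable: in practice one can only substitute an upper bound from some particular compressor, and the choice of compressor smuggles in assumptions of its own. A minimal sketch using Python's zlib (purely as an illustration, not how Yudkowsky or anyone else formalizes the idea) shows the substitution:

```python
# Illustrative only: compressed length as a computable stand-in for the uncomputable
# Kolmogorov complexity K(x). Different compressors give different answers.
import os
import zlib

def compressed_length(data: bytes) -> int:
    """An upper bound (up to a constant) on the Kolmogorov complexity of `data`."""
    return len(zlib.compress(data, 9))

structured = b"01" * 20          # 40 bytes with an obvious repeating pattern
random_ish = os.urandom(40)      # 40 bytes with (almost certainly) no usable pattern

print(len(structured), compressed_length(structured))   # compresses to well under 40 bytes
print(len(random_ish), compressed_length(random_ish))   # roughly 40 bytes plus format overhead
```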

Yudkowsky is a staunch total utilitarian,[72] which leads him to questionable conclusions via questionable thought experiments. To him, a sufficiently large number of people each experiencing a speck of dust in their eye for a moment could be worse than one man being tortured for 50 years.[73] More specifically, it is total utilitarianism that Yudkowsky defends in this thought experiment.
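The arithmetic behind the thought experiment is plain additive aggregation: any fixed harm to one person is outweighed once a tiny harm is multiplied across enough people. A minimal sketch with made-up disutility numbers (invented for illustration, not Yudkowsky's) shows where the crossover lands:

```python
# Additive ("total utilitarian") aggregation with made-up numbers, purely to show the arithmetic.
import math

TORTURE_DISUTILITY = 1.0e12       # one person tortured for 50 years, in arbitrary suffering units
DUST_SPECK_DISUTILITY = 1.0e-6    # one momentary dust speck, in the same arbitrary units

# Smallest number of dust-speck victims whose summed disutility exceeds the torture.
threshold = math.ceil(TORTURE_DISUTILITY / DUST_SPECK_DISUTILITY)
print(f"{threshold:.3e}")         # 1.000e+18 under these made-up numbers

# The thought experiment posits 3^^^3 people, a power tower of 3s incomparably larger than
# 10**18, so on a purely additive accounting the dust specks "win" by an enormous margin.
```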

Also, while his precise position on it is vague, he wrote a short sci-fi story in which rape was briefly mentioned as being legal.[74] The fact that the character remarking on it didn't seem to be referring to consensual sex as we understand it today did not prevent a massive reaction in the comments section. Yudkowsky responded, "The fact that it's taken over the comments is not as good as I hoped, but neither was the reaction as bad as I feared," and described the science fiction world he had in mind as a "Weirdtopia" rather than a dystopia.[74]


Notes[edit]

  1. From this one may deduce that there are countless things that you can't prove won't happen, and that you should therefore worry about all of them.
  2. This table shows the donations received in the years 2000-2009.
  3. While it is a powerful tool, Bayesian probability, like any other tool, has its limitations.[70]

References[edit]

  1. 1.0 1.1 Eliezer Yudkowsky Facts by steven0461 (March 22, 2009) LessWrong
  2. Our Team, Machine Intelligence Research Institute
  3. Transcription of Eliezer's January 2010 video Q&A (Nov 14, 2011) LessWrong
  4. 4.0 4.1 Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures by Eliezer Yudkowsky (2001) Machine Intelligence Research Institute, formerly Singularity Institute
  5. Oxford shuts down institute run by Elon Musk-backed philosopher by Nick Robins-Early (19 Apr 2024 18.46 EDT) The Guardian.
  6. Superintelligence: Paths, Dangers, Strategies by Nick Bostrom (2014). Oxford University Press. ISBN 0198739834.
  7. 7.0 7.1 7.2 Muehlhauser-Wang Dialogue by lukeprog (April 22, 2012) LessWrong
  8. Q&A with experts on risks from AI #2 by XiXiDu (January 9, 2012) LessWrong
  9. 9.0 9.1 Facebook post by Eliezer Yudkowsky (March 11, 2016) Facebook
  10. Debate Is Now Book by Robin Hanson (September 4, 2013) Overcoming Bias
  11. I Still Don't Get Foom by Robin Hanson (July 24, 2014) Overcoming Bias
  12. Why TensorFlow will change the game for AI, somatic blog
  13. Coding a Transhuman AI by Eliezer Yudkowsky (Published 1998; Archived February 1, 2001) sysopmind.com
  14. The Plan to Singularity by Eliezer Yudkowsky (1999; Archived February 9, 2001) sysopmind.com
  15. "Engines of Creation 2000 Confronting Singularity": Spring 2000 Senior Associate Gathering (Published 2000; Archived April 28, 2001) Foresight Institute
  16. singinst.org (Archived September 14, 2000) Singularity Institute, now Machine Intelligence Research Institute
  17. Coding a Transhuman AI 2.2.0 by Eliezer Yudkowsky (Published 2000; Archived February 2, 2001)
  18. Timeline of Machine Intelligence Research Institute, Timelines Wiki
  19. Coherent Extrapolated Volition by Eliezer Yudkowsky (2004) Machine Intelligence Research Institute, formerly Singularity Institute
  20. Timeless Decision Theory by Eliezer Yudkowsky (2010) Machine Intelligence Research Institute, formerly Singularity Institute
  21. Implications of a Software-Limited Singularity by Carl Shulman and Anders Sandberg, edited by Klaus Mainzer (2010) In ECAP10: VIII European Conference on Computing and Philosophy
  22. The Ethics of Artificial Intelligence by Nick Bostrom and Eliezer Yudkowsky (2011)
  23. Superintelligence by Nick Bostrom (2014) Oxford University Press, 1st ed. ISBN 978-0-19-967811-2
  24. Here's exactly how Jeffrey Epstein spent $30 million by Aaron Brezel (July 24, 2019) Miami Herald
  25. Yvain comments on The Finale of the Ultimate Meta Mega Crossover. Scott Alexander, LessWrong, 23 September 2009 10:08:17PM
  26. Harry Potter and the Methods of Rationality by Eliezer Yudkowsky (Published February 28, 2010; Last updated March 14, 2015) FanFiction.net
  27. Here's 10 Harry Potter Fanfics That You Should Read Right Now by Rhys McKay
  28. Hugo Awards - HP:MoR by Eneasz (January 21, 2011) LessWrong
  29. Hugo Awards - HP:MoR (part 2) by Eneasz (January 31, 2012) LessWrong
  30. 30.0 30.1 Author's Note 119: Shameless Blegging by Eliezer Yudkowsky (March 10, 2015) hpmor.com
  31. Tumblr post by asocratesgonemad (Published Jan 29, 2016 8:14 pm; Archived January 30, 2016 16:04:19 UTC) Tumblr
  32. 2012 Parsec Awards Winners & Finalists
  33. 2015 Parsec Awards Winners & Finalists
  34. Why are people "put off by rationality"? by adamzerner (August 5, 2014) LessWrong
  35. Post by Eliezer Yudkowsky (March 18, 2014) Facebook
  36. Message by Eliezer Yudkowsky (Published July 4, 2007 ; Archived July 17, 2013) SL4
  37. Yudkowski Returns! reviewed by Robert Weinstein in: Pretentious Festival Reviews - Page 2 (Archived February 11, 2009) nytheatre.com
  38. Arbital: Solving online explanations by Eliezer Yudkowsky (Mar 3, 2016)
  39. What's up with Arbital? by Alexei (29th Mar 2017) LessWrong.
  40. Arbital postmortem by alexei (30th Jan 2018) LessWrong.
  41. To Spread Science, Keep It Secret by Eliezer Yudkowsky (27th Mar 2008) LessWrong.
  42. Yudkowsky as Christ Risen ImageShack (archived from December 28, 2011).
  43. The End (of Sequences) Anonymous 2009 comment with link to Photoshopped image of Yudkowsky as Christ Risen.
  44. Discussion: Yudkowsky's actual accomplishments besides divulgation (Raw Power, LessWrong, 25 June 2011)
  45. Flare Programming Language, sourceforge.net.
  46. There is a conference proceeding: Eliezer Yudkowsky (2011). Complex Value Systems are Required to Realize Valuable Futures. In Proceedings of AGI 2011.
  47. 5 Minutes With a Visionary: Eliezer Yudkowsky by Gregory Saperstein (3:38 PM ET Thu, 9 Aug 2012) CNBC.
  48. Eliezer Yudkowsky, Inadequate Equilibria: Where and How Civilizations Get Stuck.
  49. How Law Changes the Environmental Mind: An Experimental Study of the Effect of Legal Norms on Moral Perceptions and Civic Enforcement by Yuval Feldman & Oren Perez (2009) Journal of Law and Society 36(4):501-535.
  50. Pascal's mugging by Nick Bostrom (2009) Analysis 69(3):443-445.
  51. A Girl Corrupted by the Internet is the Summoned Hero?!
  52. Eliezer by Eliezer S. Yudkowsky (2000) Sysop Mind (archived from February 5, 2001).
  53. Checking in with r/badphil's old friend Eliezer Yudkowsky by barkevious2 (c. 2017) Reddit.
  54. The Level above Mine by Eliezer Yudkowsky (26th Sep 2008) LessWrong.
  55. Eliezer Yudkowsky Facts: (Photoshopped version of this photo.) comment by ata (2009) Lesswrong.
  56. Slide from Yudkowsky speech by Mike Kuniavsky (May 14, 2006) Flickr.
  57. Yudkowsky as Christ Risen ImageShack (archived from December 28, 2011).
  58. The End (of Sequences) Anonymous 2009 comment with link to Photoshopped image of Yudkowsky as Christ Risen.
  59. Neil deGrasse Tyson on Cryonics by bekkerd (9th May 2012) LessWrong.
  60. So You Want To Be A Seed AI Programmer by Eliezer Yudkowsky (c. 2009) Accelerating Future (archived from January 16, 2010). "I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified."
  61. The Dilemma: Science or Bayes? by Eliezer Yudkowsky (13th May 2008) LessWrong.
  62. LessWrong Jargon LessWrong.
  63. Re: Ethical basics email from Eliezer S. Yudkowsky (Jan 23 2002 - 21:29:18 MST) Accelerating Future (archived from 21 Aug 2013 07:23:00 UTC).
  64. Undiscriminating Skepticism by Eliezer Yudkowsky (14th Mar 2010) LessWrong.
  65. Is Cryonics Feasible? by Stephen Barrett (September 2, 2005) Quackwatch (archived from June 17, 2017).
  66. The Correct Contrarian Cluster by Eliezer Yudkowsky (21st Dec 2009) LessWrong. "I can think of only three slam-dunks off the top of my head: Atheism: Yes. Many-worlds: Yes. P-zombies: No."
  67. Scientific method: Defend the integrity of physics by George Ellis & Joe Silk (2014) Nature 516:321–323. doi:10.1038/516321a.
  68. And the Winner is… Many-Worlds! by Eliezer Yudkowsky (11th Jun 2008) LessWrong.
  69. Eliezer Yudkowsky on Bayes and Science by Massimo Pigliucci (September 24, 2010) Rationally Speaking.
  70. Bayesian Epistemology by Hanti Lin (Jun 13, 2022) Stanford Encyclopedia of Philosophy.
  71. Eliezer Yudkowsky: Lesswrong Rationality and Mainstream Philosophy
  72. See the Wikipedia article on Average and total utilitarianism.
  73. Less Wrong: Torture vs Dust Specks
  74. 74.0 74.1 Interlude with the Confessor (4/8) by Eliezer Yudkowsky (2nd Feb 2009) LessWrong.