Eliezer Yudkowsky

Yudkowsky in 2006. Prior to uploading his consciousness into a quantum computer.
Before Bruce Schneier goes to sleep, he scans his computer for uploaded copies of Eliezer Yudkowsky.[1]

Eliezer Yudkowsky (b. 1979) is an AI researcher, blogger, and exponent of human rationality (specifically, his Bayes-based version of it). Yudkowsky cofounded and works at the Machine Intelligence Research Institute (formerly the Singularity Institute for Artificial Intelligence), a nonprofit organization that concerns itself with the concept known as the singularity.[2]

Yudkowsky also founded the blog community LessWrong as a sister site and offshoot of Overcoming Bias, where he began his blogging career with GMU economist Robin Hanson. Being a young man who wants everyone to live forever, he also has an overweening interest in cryonics, and is (probably) vulnerable to hemlock. As a self-described autodidact, he has no formal training in the subjects he writes about.


AI "research"

Find whatever you’re best at; if that thing that you’re best at is inventing new math[s] of artificial intelligence, then come work for the Singularity Institute. If the thing that you’re best at is investment banking, then work for Wall Street and transfer as much money as your mind and will permit to the Singularity Institute where [it] will be used by other people.
—Eliezer Yudkowsky on how to save the human race.[3]

Yudkowsky believes he has identified a "big problem in AI research" which no one else had previously noticed: there is no reason to assume an AI would give a damn about humans or anything we care about, given that it won't have a million years as a savannah ape or a billion years of evolution in its makeup to build up any morality or aversion to killing us. He also believes AI is imminent. As such, working out how to create a Friendly AI (FAI) — one that won't kill us, inadvertently or otherwise — is the Big Problem he has taken as his own to solve.[4]

The problem scenario is known as "AI foom," which is short for "recursively self-improving Artificial Intelligence engendered singularity." It is based on the ideas that:

  • Despite glacial progress in AI so far, the AI-based Singularity will occur soon. Trust us!
  • AIs would want to improve themselves (and we want them to).
  • Human-level or better AIs could not be imprisoned or threatened with having their plugs pulled, because they would be able to talk their way out of the situation. Interestingly, Yudkowsky claims to have tested this idea by role-playing the part of a supersmart AI, with a lesser mortal playing the jailer. However — the reader may note the beginning of a pattern here — the transcripts have never been published.
  • AIs will also be able to gain virtually unlimited control over the physical world through nanotechnology.

As is evident from the dialogue between Luke Muehlhauser and Pei Wang,[5] Yudkowsky's (and thus LessWrong's) conception and understanding of Artificial General Intelligence differs starkly from the mainstream scientific understanding of it. Yudkowsky believes an AGI will be based on some form of decision theory (although all his examples of decision theories are computationally intractable) and/or implement some form of Bayesian logic (same problem again).
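
To make the intractability point concrete, here is a minimal sketch of exact Bayesian updating over a toy hypothesis space. This is our illustration, not anything from Yudkowsky or MIRI: the hypotheses, prior and likelihood are made up, and the only point is that the normalising sum has to visit all 2^n hypotheses, which becomes hopeless long before the hypothesis space resembles anything a general intelligence would need.

    # Minimal illustrative sketch: exact Bayesian updating by brute enumeration.
    # The hypothesis space over n binary features has 2**n members, so the
    # normalisation step below is exponential in n. That, in a nutshell, is why
    # "just apply Bayes' theorem" is not an implementable AGI design.
    from itertools import product

    def exact_posterior(prior, likelihood, data, n):
        """Return P(h | data) for every hypothesis h, by enumerating all of them."""
        hypotheses = list(product([0, 1], repeat=n))   # 2**n tuples of bits
        weights = {h: prior(h) * likelihood(data, h) for h in hypotheses}
        z = sum(weights.values())                      # touches every hypothesis
        return {h: w / z for h, w in weights.items()}

    if __name__ == "__main__":
        data = (1, 0, 1, 1)                            # toy observed bit string
        prior = lambda h: 1.0                          # uniform, unnormalised
        likelihood = lambda d, h: (0.9 ** sum(a == b for a, b in zip(d, h))
                                   * 0.1 ** sum(a != b for a, b in zip(d, h)))
        posterior = exact_posterior(prior, likelihood, data, len(data))
        best = max(posterior, key=posterior.get)
        print(best, round(posterior[best], 4))         # prints (1, 0, 1, 1) 0.6561

Real systems cope with this by approximating; the exact calculation above is only feasible because the example is four bits wide.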

Since these are both purely abstract mathematical functions, there is no reason an AGI based on them would share human values (not that you can actually base an AGI on them). By this reasoning, Yudkowsky sees the need to bolt on some sort of morality module that will somehow be immune to alteration, even though the "foom" scenario he fears so much is premised on AGIs that can rewrite their own code. This in turn requires "solving" ethics in some form that can be computerized — although ethics remains an open problem in philosophy after thousands of years. But then, Plato, Aristotle and Kant just weren't as smart as Yudkowsky believes himself to be.

Anyway, actual AI research isn't based on implementing unimplementable functions; it is based on trying to reproduce known examples of natural intelligence, especially human intelligence, in all its messy glory, and on learning. The real-world approach gives two separate routes to friendliness for free. One is that by reproducing a human mind, and not just an abstract mathematical function, you may reproduce human-style moral intuitions. The other is that an AGI which learns from human mentors (like HAL in 2001) will pick up human ethics from them.

In addition, some of Yudkowsky's critics identify more urgent threats than unfriendliness: deliberate weaponisation of AIs, coding errors, humans themselves, cyborgisation,[6] etc.

Achievements

Yudkowsky is almost entirely unpublished outside of his own foundation and blogs[7] and never finished high school, much less did any actual AI research. No samples of his AI coding have been made public, probably to prevent someone from releasing his monster AI.

It is important to note that, as well as having no training in his claimed field, Yudkowsky has pretty much no accomplishments of any sort to his credit beyond getting Peter Thiel to give him money. Even his fans admit "A recurring theme here seems to be 'grandiose plans, left unfinished'."[8] He claims to be a skilled computer programmer, but has no code publicly available other than Flare, an unfinished computer language for AI programming with XML-based syntax.[9] His papers are generally self-published and, as of 2015, had garnered a total of two citations in JSTOR-archived journals (neither to do with AI), one of them from his friend Nick Bostrom at the closely-associated Future of Humanity Institute.[10]

His actual, observable results in the real world are a popular fan fiction (which, to his credit, he did in fact finish, unusually for the genre), a large pile of blog posts, and a surprisingly well-funded charity (one that has produced fewer papers in a decade and a half than a single graduate student produces in the course of a physics Ph.D., and the latter's would be peer-reviewed).

Genius or crank?

Whether Yudkowsky considers himself a genius is totally clear: he refers to himself as a genius six times in his "autobiography." However, he admits to possibly being less smart than John Conway.[wp][11] As a homeschooled individual with no college degree, Yudkowsky may not be in an ideal position to estimate his own smartness. To say that many of his followers think he is a genius would be an understatement.[12][13] Similarly, some of his followers are derisive of mainstream scientists; just look for comments about "not smart outside the lab" and "for a celebrity scientist."[14] Yudkowsky believes that a doctorate in AI is a net negative when it comes to building a Seed AI.[15] While Yudkowsky doesn't attack Einstein, he does think the scientific method cannot handle things like the Many-Worlds Interpretation as well as his take on Bayes' theorem can.[16] LessWrong does indeed have its own unique jargon.[17]

Disagreement with Yudkowsky's ideas is often attributed to "undiscriminating skepticism." If you don't believe cryonics works, it's because you have watched Penn and Teller.[18] It's just not a possibility that you don't believe it works because it has failed tests and is made improbable by the facts.[19]

More controversial positions

Despite being viewed on LessWrong as the smartest two-legged being ever to walk this planet, Yudkowsky (and by consequence much of the LessWrong community) endorses as Truth™ positions that are actually controversial in their respective fields. Below is a partial list:

  • Transhumanism is correct. Cryonics might someday work. The Singularity is near![citation NOT needed]
  • Bayes' theorem and the scientific method don't always lead to the same conclusions (and therefore Bayes is better than science).[20]
  • Bayesian probability can be applied indiscriminately.[21]
  • Non-computable results, such as Kolmogorov complexity, are totally a reasonable basis for an entire epistemology. Solomonoff, baby!
  • Many Worlds Interpretation (MWI) of quantum physics is correct (a "slam dunk"), despite the lack of consensus among quantum physicists.[22]
  • Evolutionary psychology is well-established science.
  • Utilitarianism is a correct theory of morality. In particular, he proposes a framework by which an extremely, extremely huge number of people experiencing a speck of dust in their eyes for a moment could be worse than a man being tortured for 50 years[23] (the arithmetic behind this is sketched just below this list).
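
Since the dust-speck argument is, at bottom, just arithmetic, here is a back-of-the-envelope rendering of it. This is our illustration rather than a quotation from Yudkowsky's post: ε stands for the hypothetical disutility of one momentary dust speck and T for the disutility of fifty years of torture; neither symbol appears in the original.

    % Illustrative only: \epsilon and T are hypothetical placeholders.
    N \cdot \epsilon > T \iff N > \frac{T}{\epsilon},
    \qquad\text{and}\qquad
    3\uparrow\uparrow\uparrow 3 \gg \frac{T}{\epsilon}
    \text{ for any remotely plausible finite ratio.}

On a straightforwardly additive model of aggregation the conclusion follows immediately; the reason the conclusion is controversial is the question of whether tiny harms to different people may be summed like this at all.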

Also, while it is not very clear what his actual position is on this, he wrote a short sci-fi story where rape was briefly mentioned as legal.[24] That the character remarking on it didn't seem to be referring to consensual sex in the same way we do today didn't prevent a massive reaction in the comments section. He responded "The fact that it's taken over the comments is not as good as I hoped, but neither was the reaction as bad as I feared." He described the science fiction world he had in mind as a "Weirdtopia" rather than a dystopia.[25]

Fan fiction

Oh god. The future of the human species is in the hands of a guy who writes crossover fan-fiction.[26]

For a slightly more humorous but still fairly insightful look into his mind, you can read his Harry Potter fan fiction, where a rationalist Harry (whom he has altered into a total Mary Sue author avatar) tries to get his head around magic.[27] This is expressly intended as propaganda for rationalism.

It's a cracking good read for fanfic and is very highly rated on FanFiction.net. Yudkowsky and his fans believe it is Hugo-quality material,[28][29] and Yudkowsky has asked his fans to spam the nominations.[30] Others are more critical,[31] and it was slightly hilarious when someone posted to LessWrong a fairly normal literary analysis of the main character as a narcissist, apparently without realising to what degree he was a self-insert.[32]

LessWrong may have developed a little eeny weeny tiny bit of a personality cult around Yudkowsky, with a list of Eliezer Yudkowsky Facts,[1] but the fan fiction about him was apparently a play, produced without his knowledge.[33][34]

Footnotes

  1. Eliezer Yudkowsky Facts
  2. Yudkowsky and friends at the Institute
  3. [1]
  4. Yudkowsky's opus, Creating Friendly AI
  5. http://lesswrong.com/lw/bxr/muehlhauserwang_dialogue/
  6. [2]
  7. There is one conference paper: Eliezer Yudkowsky (2011). "Complex Value Systems are Required to Realize Valuable Futures." In Proceedings of AGI 2011.
  8. Discussion: Yudkowsky's actual accomplishments besides divulgation (Raw Power, LessWrong, 25 June 2011)
  9. http://flarelang.sourceforge.net/
  10. Yuval Feldman and Oren Perez. "How Law Changes the Environmental Mind: An Experimental Study of the Effect of Legal Norms on Moral Perceptions and Civic Enforcement." Journal of Law and Society Vol. 36, No. 4 (Dec., 2009), pp. 501-535; Nick Bostrom. "Pascal's mugging." Analysis Vol. 69, No. 3 (July 2009), pp. 443-445
  11. The Level above Mine
  12. http://lesswrong.com/lw/4g/eliezer_yudkowsky_facts/1qhd?c=1
  13. http://lesswrong.com/lw/d6/the_end_of_sequences/aad?c=1
  14. http://lesswrong.com/lw/8f4/neil_degrasse_tyson_on_cryonics/
  15. "I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified."
  16. http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/
  17. http://wiki.lesswrong.com/wiki/Jargon
  18. We don't need P&T to prove it wrong.
  19. Is Cryonics Feasible?, Quackwatch
  20. Eliezer Yudkowsky on Bayes and Science, Massimo Pigliucci, Rationally Speaking
  21. While it is a powerful tool, Bayesian probability has its limitations like any other. See Stanford Encyclopedia of Philosophy's entry for Potential Problems with Bayesian Epistemology
  22. " I can think of only three slam-dunks off the top of my head: Atheism: Yes. Many-worlds: Yes. P-zombies: No. "
  23. Less Wrong: Torture vs Dust Specks
  24. [3]
  25. [4]
  26. [5] Yvain, LessWrong, 23 September 2009 10:08:17PM
  27. Harry Potter and the Methods of Rationality
  28. http://lesswrong.com/lw/3wg/hugo_awards_hpmor/3dnb
  29. http://lesswrong.com/lw/9on/hugo_awards_hpmor_part_2/
  30. http://hpmor.com/notes/119/
  31. http://su3su2u1.tumblr.com/tagged/Hariezer-Yudotter/chrono/
  32. http://lesswrong.com/lw/k9r/cognitive_biases_due_to_a_narcissistic_parent/
  33. http://www.sl4.org/archive/0707/16399.html
  34. Review