Eliezer Yudkowsky
“Before Bruce Schneier goes to sleep, he scans his computer for uploaded copies of Eliezer Yudkowsky.”[1]
Eliezer Yudkowsky (1979–) is an American AI researcher, blogger, and exponent of human rationality (specifically, his Bayes-based version of it). Yudkowsky cofounded and works at the Machine Intelligence Research Institute (formerly the Singularity Institute for Artificial Intelligence), a nonprofit organization that concerns itself with the concept known as the singularity.[2]
Yudkowsky also founded the blog community LessWrong as a sister site and offshoot of Overcoming Bias, where he began his blogging career with GMU economist Robin Hanson. Being an idealistic fellow who wants everyone to live forever, he also has an overweening interest in cryonics, and is (probably) vulnerable to hemlock. As a self-described autodidact, he has no formal training in the subjects he writes about.
AI "research"
“Find whatever you’re best at; if that thing that you’re best at is inventing new math[s] of artificial intelligence, then come work for the Singularity Institute. If the thing that you’re best at is investment banking, then work for Wall Street and transfer as much money as your mind and will permit to the Singularity Institute where [it] will be used by other people.”
—Eliezer Yudkowsky on how to save the human race[3]
Yudkowsky believes he has identified a "big problem in AI research" which no one else had previously noticed. The problem is that there is no reason to assume an AI would give a damn about humans or what we care about in any way at all, given that it won't have had a million years as a savannah ape or a billion years of evolution in its makeup to build up any morality or aversion to killing us. He also believes AI is imminent. As such, working out how to create a Friendly AI (FAI) — one that won't kill us, inadvertently or otherwise — is the Big Problem he has taken as his own to solve.[4]
The problem scenario is known as "AI foom," short for a singularity brought about by a recursively self-improving artificial intelligence. It is based on the ideas that:
- Despite glacial progress in AI so far, the AI-based Singularity will occur soon.
- AIs would want to improve themselves (and we would want them to).
- Human-level or better AIs could not be imprisoned or threatened with having their plugs pulled, because they would be able to talk their way out of the situation. Interestingly, Yudkowsky claims to have tested this idea by role-playing the part of a supersmart AI, with a lesser mortal playing the jailer. However — the reader may note the beginning of a pattern here — the transcripts have never been published.
- AIs will also at some point be able to gain virtually unlimited control over the physical world through nanotechnology.
His friend and fellow transhumanist Nick Bostrom (philosopher, existential-risk theorist, and director of the Future of Humanity Institute at the University of Oxford) gave the above ideas a book-length treatment in his 2014 book Superintelligence: Paths, Dangers, Strategies. Bostrom's central thesis appears to be "you can't prove this won't happen, therefore you should worry about it." Bostrom credits Yudkowsky in the acknowledgements section and quotes him in the book.
As is evident from the dialogue between Luke Muehlhauser and Pei Wang,[5] Yudkowsky's (and thus LessWrong's) conception of Artificial General Intelligence (AGI) differs starkly from the mainstream scientific and practical research understanding of it. Yudkowsky believes an AGI will be based on some form of decision theory (although all his examples of decision theories are computationally intractable) and/or implement some form of Bayesian logic (which has been tried in real AI, but does not scale).
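For readers wondering what "does not scale" means here: exact Bayesian inference over an arbitrary joint distribution grows exponentially with the number of variables (and is NP-hard in general), which is why working AI systems lean on structured approximations rather than raw "Bayesian logic". Below is a minimal illustrative sketch of the blow-up; it is a toy example of ours, not code from MIRI or anyone's research, and the ~1e80 atom count is just the usual yardstick.

```python
# Toy illustration: a full joint probability table over n binary variables has
# 2**n entries, and exact inference by brute-force enumeration has to sum over
# all of them. This is the scaling problem alluded to above.

def joint_table_size(n_binary_vars: int) -> int:
    """Number of entries in a full joint distribution over n binary variables."""
    return 2 ** n_binary_vars

for n in (10, 30, 100, 300):
    print(f"{n:4d} variables -> {joint_table_size(n):.2e} table entries")

# Around 300 binary variables the table already exceeds the estimated number of
# atoms in the observable universe (~1e80), which is why practical systems fall
# back on approximations (graphical models, variational inference, sampling).
```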
Since decision theory and Bayesian inference are both totally abstract mathematical constructs, there is no reason an AGI based on them would share human values (not that you can base an AGI on them in the first place). By this reasoning, Yudkowsky sees the need to bolt on some sort of morality module which will somehow be immune from alteration, even though the "foom" scenario he fears so much is based on AGIs which can rewrite their own code. (Bostrom points out in his book that there are several situations in which an AI might have an incentive to alter its own final goals.) Yudkowsky therefore also wants to "solve" ethics in some form that can be computerized — although ethics remains an open problem in philosophy after thousands of years. But then, Plato, Aristotle and Kant just weren't as smart as Yudkowsky believes himself to be.
Anyway, actual AI research isn't based on implementing unimplementable functions; it is based on learning, and on trying to reproduce known examples of natural intelligence, especially human intelligence, in all its messy glory.
In addition, some of Yudkowsky's critics identify more urgent threats than unfriendliness: deliberate weaponisation of AIs, coding errors, humans themselves, cyborgisation,[6] etc.
Yudkowsky/Hanson debates
Economist Robin Hanson, Yudkowsky's then co-blogger, has debated Yudkowsky on "AI foom" on several occasions. Hanson, who had previously been an AI researcher for several years[7], wrote a series of blog posts with Yudkowsky in 2008 and debated him in person in 2011. These debates were later collected and published as an ebook.[8] In 2014, after having read Bostrom's book Superintelligence, Hanson was sticking to his guns: he wrote another blog post entitled "I Still Don't Get Foom"[9]. He argued:
“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g. weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme.
It was immediately after the original 2008 round of this debate that Yudkowsky left Overcoming Bias (now Hanson's personal blog) and moved the Sequences to LessWrong.
They still weren't seeing eye to eye in 2016, after the famous Go match between Google's AlphaGo software and Lee Sedol.[7] Hanson said he thought that general-purpose AI was analogous to nation-states, requiring a lot of workers and an interchange of ideas, whereas limited special-purpose AI was analogous to product teams in businesses, and he gave a criterion by which his theory could, in principle, be tested — whereupon Yudkowsky said that he thought the world would already have "ended" before there was a chance to verify Hanson's theory! Yudkowsky claimed that different AI labs were doing their own AI training and there was almost no exchange of pre-trained modules or specialist components. (However, general AI components are widely reused between companies.[10])
Yudkowsky had previously announced his belief that if (as in fact happened) AlphaGo beat the Go grandmaster, that would mean the singularity was getting closer — despite the fact that the previous occasions when computers beat top-ranked humans at a board game (checkers and chess) still left computers well below the intelligence of a human infant.
Singularity Institute / MIRI
In 1998, Yudkowsky finished the first version of 'Coding a Transhuman AI', also known as 'Notes on the Design of Self-Enhancing Intelligence'.[11] He then spent a few months writing design notes for a new programming language named Flare, followed by 'Plan to Singularity',[12] which floated the idea of a Singularity Institute. At a Foresight Institute meeting in May 2000,[13] the idea of establishing the institute won the support of two others, and the Singularity Institute was duly established, complete with a website.[14] He then wrote a revised version of 'Coding a Transhuman AI'.[15] On 23 July, SI started the Flare project under the leadership of one Dimitry Myshkin; the project was cancelled in 2003.[16] From the very start, Yudkowsky attached great importance to creating 'Friendly AI', and in 2001 he wrote the book-length 'Creating Friendly AI'.[17][18]
In 2004, Yudkowsky had a 'major insight': Coherent Extrapolated Volition (CEV). A person might not always want what is actually best for them, but Yudkowsky wants his AI to do what is really good for people. How would the AI figure that out? He believes that, to find what is best for a person, the AI would scan that person's brain and do 'something complicated' (his words); to ensure the AI is friendly to all humans, it would do this 'something complicated' to everyone. The amount of computing power needed to do this is, of course, incomprehensibly large, and nobody has any idea what it would look like in practice if implemented.[19]
Until 2008 or 2009, the Singularity Institute was practically a one-man show: all of its publications bore Eliezer Yudkowsky's name, and it had no other full-time employee. Change came in 2008 with the emergence of LessWrong. Soon, publications by new authors such as Carl Shulman, N. Tarleton and, later, Luke Muehlhauser began to appear, and this became the norm, with most papers written by the newly inducted researchers.
The new focus of research at SI became decision theory. In 2010, Eliezer Yudkowsky published Timeless Decision Theory.[20]
Another development visible by that time was the collaboration with Nick Bostrom's Future of Humanity Institute. Multiple joint articles by SI researchers and FHI researchers such as A. Sandberg[21] and Stuart Armstrong appeared, and Yudkowsky and Bostrom co-authored an article in 2011.[22] Bostrom's 2014 book Superintelligence refers to Yudkowsky 15 times and devotes nearly five pages to Coherent Extrapolated Volition.[23] In 2009, Jeffrey Epstein donated $50,000 to SIAI.[24]
Other projects
Fan fiction
“Oh god. The future of the human species is in the hands of a guy who writes crossover fan-fiction.”
—Scott Alexander[25]
For a slightly more humorous but still fairly insightful look into his mind, you can read his Harry Potter fan fiction. Yudkowsky is not in fact all that invested in the Harry Potter 'verse. He wanted to use the fanfic as a platform to teach rationality to a wider audience, because he felt that lack of rationality was causing people to reject his ideas about AI and transhumanism. In the story a rationalist Harry (whom he has altered into a total Mary Sue author avatar[26]) tries to get his head around magic[27] whilst delivering lectures on science to his elders and the audience.
It's a cracking good read for fanfic (though for best results, just skip straight from chapter 30 to chapter 100) and is very highly rated on FanFiction.net. Yudkowsky and his fans believe it is Hugo-quality material;[28][29] Yudkowsky asked his fans to spam the 2016 nomination for Best Novel[30][31] (because after the Sad Puppies, spamming the Hugos will surely convince people that you're a genius rather than some sort of stupendous cockhead), though it didn't make the shortlist. Some readers were less impressed, giving up fairly early because of "Hariezer's" arrogance and sermonising.[32] As Yudkowsky himself notes in a Facebook post:[33]
People who have this [status regulating] emotion leave angry reviews on Chapter 6 of HPMOR where they complain about how much Harry is disrespecting Professor McGonagall and they can’t read this story.
Other non-fans stuck around to write detailed critiques and take issue with the didactic content.[34] It was also slightly hilarious when someone posted to LessWrong a fairly normal literary analysis of the main character as a narcissist, apparently without realising to what degree he was a self-insert.[35]
LessWrong may have developed a little eeny weeny tiny bit of a personality cult around Yudkowsky, complete with a list of Eliezer Yudkowsky Facts,[1] but the fan fiction about him was apparently a play, produced without his knowledge.[36][37]
Arbital
After tiring of LessWrong and finally finishing his Harry Potter fanfic, Yudkowsky turned to his next project: Arbital, an exciting new competitor in the educational market with a different approach to learning from that of Wikipedia.[38] He first proposed it in March 2015:[30]
First, I’ve designed an attempted successor to Wikipedia and/or Tumblr and/or peer review, and a friend of mine is working full-time on implementing it. If this project works at the upper percentiles of possible success (HPMOR-level ‘gosh that sure worked’, which happens to me no more often than a third of the time I try something), then it might help to directly address the core societal problems (he said vaguely with deliberate vagueness).
With $300k in funding, developers were hired, with Yudkowsky taking an oversight role. There were a number of changes of direction. It went live in early 2016 as the LessWrong Sequences in a different bottle; by late 2016 it tagged itself as "the place for crowdsourced, intuitive math explanations", an area in which it might actually have had a chance of beating the notoriously opaque math articles on Wikipedia, except that they couldn't find enough knowledgeable people to write the explanations. As of early 2017, the front page was a Slashdot-style aggregator for people in the MIRI subculture; by the end of March, they finally admitted defeat[39] and turned the front page into a coming-soon page for a blogging platform.
A project developer lamented Yudkowsky's leadership style:
Also around that time, it became clear to us that things just weren’t going well. The primary issue was that we completely relied on Eliezer to provide guidance to which features to implement and how to implement them. Frequently when we tried to do things our way, we were overruled. (Never without a decent reason, but I think in many of those cases either side had merit. Going with Eliezer’s point of view meant we frequently ended up blocked on him, because we couldn’t predict the next steps.) Also, since Eliezer was the only person seriously using the product, there wasn’t enough rapid feedback for many of the features. And since he wasn’t in the office with us every day, we were often blocked.
So, we decided to take the matter into our own hands. The three of us would decide what to do, and we would occasionally talk to Eliezer to get his input on specific things.[40]
Achievements
“Why should I follow your Founder when he isn’t an Eighth Level anything outside his own cult?”
—Eliezer Yudkowsky[41]
It is important to note that, as well as having no training in his claimed field, Yudkowsky has pretty much no accomplishments of any sort to his credit beyond getting Peter Thiel to give him money. (He has since moved on to Ethereum cryptocurrency programmers.)
Even his fans admit that "A recurring theme here seems to be 'grandiose plans, left unfinished'."[42] He claims to be a skilled computer programmer, but the only code he has made available is Flare, an unfinished programming language for AI with XML-based syntax.[43] (It was supposed to be the language in which SIAI would code its Seed AI, but the project was cancelled in 2003.) He found it necessary to hire professional programmers to implement his Arbital project (see above), which eventually failed.
Yudkowsky is almost entirely unpublished outside of his own foundation and blogs[44] and never finished high school, much less did any actual AI research. No samples of his AI coding have been made public.
His papers are generally self-published and, as of 2015, had a total of two citations in JSTOR-archived journals (neither to do with AI), one of which was from his friend Nick Bostrom at the closely-associated Future of Humanity Institute.[45]
His actual, observable results in the real world are a popular fan fiction (which, to his credit, he did in fact finish, unusually for the genre), a pastiche erotic light novel,[46] a large pile of blog posts, and a surprisingly well-funded research organisation, which has produced fewer papers in a decade and a half than a single graduate student produces in the course of a physics Ph.D., and the graduate student's would be peer reviewed.
Genius or crank?
Whether Yudkowsky considers himself a genius is totally clear: he refers to himself as a genius six times in his "autobiography." He also claims to have "randomly won the writing talent lottery".[47] He thinks most PhDs are pretty ordinary compared to the people he hobnobs with:
The really striking fact about the researchers who show up at AGI conferences, is that they're so... I don't know how else to put it...
...ordinary.
Not at the intellectual level of the big mainstream names in Artificial Intelligence. Not at the level of John McCarthy or Peter Norvig (whom I've both met).
I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them -- just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified.
However, he admits to possibly being less smart than John Conway.[48] As a homeschooled individual with no college degree, Yudkowsky may not be in an ideal position to estimate his own intelligence. To say that many of his followers think he is a genius would be an understatement.[49][50] Similarly, some of his followers are derisive of mainstream scientists; just look for comments about "not smart outside the lab" and "for a celebrity scientist."[51] Yudkowsky believes that a doctorate in AI is a net negative when it comes to building Seed AI.[52] While Yudkowsky doesn't attack Einstein, he does think the scientific method cannot handle things like the Many Worlds Interpretation as well as his take on Bayes' theorem can.[53] LessWrong does indeed have its own unique jargon.[54]
Just so that the reader doesn't get the mistaken impression that Yudkowsky boasts about his intellect incessantly, here he is boasting about how nice and good he is:
I took a couple of years of effort to clean up the major emotions (ego gratification and so on), after which I was pretty much entirely altruistic in terms of raw motivations, although if you'd asked me I would have said something along the lines of: "Well, of course I'm still learning... there's still probably all this undiscovered stuff to clean up..."
Disagreement with Yudkowsky's ideas is often attributed to "undiscriminating skepticism." If you don't believe cryonics works, it's because you have watched Penn & Teller: Bullshit!.[55] It's just not a possibility that you don't believe it works because it has failed tests and is made improbable by the facts.[56]
More controversial positions
Despite being viewed on LessWrong as the smartest two-legged being ever to walk this planet, Yudkowsky (and by consequence much of the LessWrong community) endorses as Truth™ positions that are actually controversial in their respective fields. Below is a partial list:
- Transhumanism is correct. Cryonics might someday work. The Singularity is near![citation NOT needed]
- Bayes' theorem and the scientific method don't always lead to the same conclusions (and therefore Bayes is better than science).[57]
- Bayesian probability can be applied indiscriminately.[58]
- Non-computable results, such as Kolmogorov complexity, are totally a reasonable basis for an entire epistemology. Solomonoff, baby!
- The Many Worlds Interpretation (MWI) of quantum physics is correct (a "slam dunk"), despite the lack of consensus among quantum physicists.[59] Yudkowsky says that a majority of theoretical physicists may or may not share his view, adding that attempted opinion polls conflict on this point.[60]
- Evolutionary psychology is well-established science.
- Philosophy is a subject so easy he can "re-invent it on the fly".[61]
- Utilitarianism is a correct theory of morality. In particular, he proposes a framework by which an extremely, extremely huge number of people experiencing a speck of dust in their eyes for a moment could add up to something worse than one man being tortured for 50 years.[62] More specifically, Yudkowsky defends total utilitarianism in this thought experiment (a sketch of the aggregation arithmetic follows this list).
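To spell out the aggregation being defended in the dust-specks case, here is a minimal sketch in our own notation (not Yudkowsky's): write ε > 0 for the disutility of one momentary dust speck, T for the disutility of fifty years of torture, and N for the number of dust-speck recipients. Total utilitarianism simply sums welfare across persons, so the specks come out as the worse outcome exactly when the following holds.

```latex
% Total-utilitarian comparison of N momentary dust specks against one
% 50-year torture (illustrative notation, not Yudkowsky's own):
%   \epsilon : disutility of one momentary dust speck (tiny but > 0)
%   T        : disutility of fifty years of torture (huge but finite)
%   N        : number of people who each get one dust speck
\[
  N \cdot \epsilon \;>\; T
  \quad\Longleftrightarrow\quad
  N \;>\; \frac{T}{\epsilon}
\]
```

Because the N used in the original LessWrong post (3^^^3 in Knuth's up-arrow notation) dwarfs any remotely plausible ratio T/ε, the framework ranks the specks as worse than the torture, which is precisely the bullet Yudkowsky bites.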
Also, while it is not very clear what his actual position on this is, he wrote a short sci-fi story in which rape was briefly mentioned as being legal.[63] That the character remarking on it didn't seem to be referring to consensual sex in the same way we do today didn't prevent a massive reaction in the comments section. He responded, "The fact that it's taken over the comments is not as good as I hoped, but neither was the reaction as bad as I feared." He described the science fiction world he had in mind as a "Weirdtopia" rather than a dystopia.[64]
External links
- Yudkowsky's autobiography, written by a 20-year-old Yudkowsky in 2000. (To be fair, he explicitly disclaims it now and considers his younger self an idiot.)
- Making HAL Your Pal by Declan McCullagh, Wired, April 19, 2001.
- Interview with Yudkowsky by John Horgan, Scientific American, March 1, 2016
- yudkowsky.net
- Arbital
- Machine Intelligence Research Institute (MIRI) web page
References
- ↑ Eliezer Yudkowsky Facts
- ↑ https://intelligence.org/team/
- ↑ http://kruel.co/2012/05/13/eliezer-yudkowsky-quotes/
- ↑ Yudkowsky's opus, Creating Friendly AI
- ↑ http://lesswrong.com/lw/bxr/muehlhauserwang_dialogue/
- ↑ http://lesswrong.com/r/discussion/lw/9a1/qa_with_experts_on_risks_from_ai_2/
- ↑ https://www.facebook.com/yudkowsky/posts/10154018209759228?pnref=story
- ↑ http://www.overcomingbias.com/2013/09/debate-is-now-book.html
- ↑ http://www.overcomingbias.com/2014/07/30855.html
- ↑ Why TensorFlow will change the game for AI
- ↑ http://web.archive.org/web/20010202165700/http://sysopmind.com/AI_design.temp.html
- ↑ http://web.archive.org/web/20010209014150/http://sysopmind.com/sing/PtS/plan.html
- ↑ http://web.archive.org/web/20010428233716/http://www.foresight.org/SrAssoc/spring2000/
- ↑ http://web.archive.org/web/20000914073559/http://www.singinst.org/
- ↑ http://web.archive.org/web/20010202042100/http://singinst.org/CaTAI.html
- ↑ https://timelines.issarice.com/wiki/Timeline_of_Machine_Intelligence_Research_Institute
- ↑ https://intelligence.org/files/CFAI.pdf
- ↑ This table shows the donations received in the years 2000-2009.
- ↑ https://intelligence.org/files/CEV.pdf
- ↑ https://intelligence.org/files/TDT.pdf
- ↑ https://intelligence.org/files/SoftwareLimited.pdf
- ↑ https://nickbostrom.com/ethics/artificial-intelligence.pdf
- ↑ https://korin-consulting.com/pdf/NickBostrom_Superintelligence.pdf
- ↑ https://www.miamiherald.com/news/state/florida/article233028682.html
- ↑ Yvain comments on The Finale of the Ultimate Meta Mega Crossover. Scott Alexander, LessWrong, 23 September 2009 10:08:17PM
- ↑ Guess who else bit his math teacher at age seven?
- ↑ Harry Potter and the Methods of Rationality
- ↑ http://lesswrong.com/lw/3wg/hugo_awards_hpmor/3dnb
- ↑ http://lesswrong.com/lw/9on/hugo_awards_hpmor_part_2/
- ↑ http://hpmor.com/notes/119/
- ↑ http://asocratesgonemad.tumblr.com/post/138311551872/just-recieved-an-email-to-hpmor-subscribers-from
- ↑ Why are people put off by rationality?
- ↑ Post by Eliezer Yudkowsky
- ↑ su3su2u1's exhaustive review, chapter by chapter and final summary.
- ↑ Cognitive Biases due to a Narcissistic Parent, Illustrated by HPMOR Quotations (Algernoq, LessWrong, 24 May 2014) — some stupendous special pleading in the comments from aggrieved fanboys
- ↑ http://www.sl4.org/archive/0707/16399.html
- ↑ Review
- ↑ https://arbital.com/p/arbital_ambitions/
- ↑ http://lesswrong.com/r/discussion/lw/otq/whats_up_with_arbital/
- ↑ https://www.lesserwrong.com/posts/kAgJJa3HLSZxsuSrf/arbital-postmortem
- ↑ http://lesswrong.com/lw/p0/to_spread_science_keep_it_secret/
- ↑ Discussion: Yudkowsky's actual accomplishments besides divulgation (Raw Power, LessWrong, 25 June 2011)
- ↑ http://flarelang.sourceforge.net/
- ↑ There is a conference proceeding: Eliezer Yudkowsky (2011). Complex Value Systems are Required to Realize Valuable Futures. In Proceedings of AGI 2011.
- ↑ Yuval Feldman and Oren Perez. "How Law Changes the Environmental Mind: An Experimental Study of the Effect of Legal Norms on Moral Perceptions and Civic Enforcement." Journal of Law and Society Vol. 36, No. 4 (Dec., 2009), pp. 501-535; Nick Bostrom. "Pascal's mugging." Analysis Vol. 69, No. 3 (July 2009), pp. 443-445
- ↑ A Girl Corrupted by the Internet is the Summoned Hero?!
- ↑ Checking in with r/badphil's old friend Eliezer Yudkowsky
- ↑ The Level above Mine
- ↑ http://lesswrong.com/lw/4g/eliezer_yudkowsky_facts/1qhd?c=1
- ↑ http://lesswrong.com/lw/d6/the_end_of_sequences/aad?c=1
- ↑ http://lesswrong.com/lw/8f4/neil_degrasse_tyson_on_cryonics/
- ↑ "I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified."
- ↑ http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/
- ↑ http://wiki.lesswrong.com/wiki/Jargon
- ↑ We don't need P&T to prove it wrong.
- ↑ Is Cryonics Feasible?, Quackwatch
- ↑ Eliezer Yudkowsky on Bayes and Science, Massimo Pigliucci, Rationally Speaking
- ↑ While it is a powerful tool, Bayesian probability has its limitations like any other. See Stanford Encyclopedia of Philosophy's entry for Potential Problems with Bayesian Epistemology
- ↑ " I can think of only three slam-dunks off the top of my head: Atheism: Yes. Many-worlds: Yes. P-zombies: No. "
- ↑ Eliezer Yudkowsky: "And the Winner is... Many-Worlds!"
- ↑ Eliezer Yudkowsky: Lesswrong Rationality and Mainstream Philosophy
- ↑ Less Wrong: Torture vs Dust Specks
- ↑ http://lesswrong.com/lw/y8/interlude_with_the_confessor_48/
- ↑ http://lesswrong.com/lw/y8/interlude_with_the_confessor_48/qt7