Eliezer Yudkowsky


[Image caption: The man in 2006. Who knows what he looks like today...]
[Image caption: Before Bruce Schneier goes to sleep, he scans his computer for uploaded copies of Eliezer Yudkowsky.[1]]

Eliezer Yudkowsky (b. 1979) is an AI researcher, blogger, and prominent exponent of human rationalism (at least, his version of it). Yudkowsky cofounded, and works at, the Singularity Institute for Artificial Intelligence, a nonprofit organization that concerns itself with the concept known as the Singularity.[2]

Yudkowsky also founded the blog community LessWrong as a sister site and offshoot of Overcoming Bias, where he began his blogging career with GMU economist Robin Hanson. Being a very smart young man who wants everyone to live forever, he also has an overweening interest in cryonics and is (probably) vulnerable to hemlock.

AI research

Yudkowsky identifies the big problem in AI research as this: there is no reason to assume an AI would give a damn about humans or anything we care about, since it would have neither a million years as a savannah ape nor a billion years of evolution in its makeup to build up any morality or aversion to killing us. And he believes AI is imminent. As such, working out how to create a Friendly AI (FAI) - one that won't kill us, inadvertently or otherwise - is the Big Problem he has taken as his own to solve.[3]
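The shape of the worry is easy to see in miniature. Below is a toy sketch in Python (purely illustrative, our construction rather than Yudkowsky's design, let alone any real AI): a utility maximizer weighs human welfare at exactly the weight its objective function gives it, which by default is zero.

    # Toy illustration only: an "agent" that picks whichever action
    # maximizes its objective function. Humans matter to it exactly as
    # much as the objective says they do, and the default objective
    # never mentions them. All names and numbers here are made up.

    def default_objective(outcome):
        # Cares only about paperclips; human welfare appears nowhere.
        return outcome["paperclips"]

    def friendly_objective(outcome):
        # A "Friendly" objective has to encode human values explicitly.
        return outcome["paperclips"] + 1000 * outcome["human_welfare"]

    outcomes = [
        {"action": "build factory on vacant lot", "paperclips": 80, "human_welfare": 1.0},
        {"action": "strip-mine the biosphere", "paperclips": 100, "human_welfare": 0.0},
    ]

    for objective in (default_objective, friendly_objective):
        best = max(outcomes, key=objective)
        print(objective.__name__, "->", best["action"])
    # default_objective -> strip-mine the biosphere
    # friendly_objective -> build factory on vacant lot

Note that neither version is malicious; each just optimizes what it was handed, which is why, on this view, the hard part is getting human values into the objective in the first place.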

Fan fiction

Oh god. The future of the human species is in the hands of a guy who writes crossover fan-fiction.[4]

For a slightly more humorous but still fairly insightful look into his mind, you can read his Harry Potter fan fiction, where a rationalist Harry (a total author avatar) tries to get his head around magic.[5] This is expressly intended as propaganda for rationalism, but it's a cracking good read for fanfic and is very highly rated on FanFiction.net. For what that's worth.

LessWrong may have developed a little eeny weeny tiny bit of a personality cult around Yudkowsky, complete with a list of Eliezer Yudkowsky Facts[1], though the fan fiction about him was apparently a play, produced without his knowledge.[6]

Footnotes

  1. Eliezer Yudkowsky Facts
  2. Yudkowsky and friends at the Institute
  3. Yudkowsky's opus, Creating Friendly AI
  4. Yvain, LessWrong, 23 September 2009 10:08:17PM
  5. Harry Potter and the Methods of Rationality
  6. http://www.sl4.org/archive/0707/16399.html