Roko's basilisk


Roko's basilisk is a proposition suggested by a member of the rationalist community LessWrong, which speculates about the potential behavior of a future godlike artificial intelligence.

The claim is that this ultimate intelligence may punish those who fail to help it, with greater punishment accorded those who knew the importance of the task. That bit is simple enough, but the weird bit is that the AI and the person punished have no causal interaction: the punishment would be of a simulation of the person, which the AI would construct by deduction from first principles. In LessWrong's Timeless Decision Theory (TDT), this is taken to be equivalent to punishment of your own actual self, not just someone else very like you.

Roko's basilisk is notable for being completely banned from discussion on LessWrong; any mention is deleted. Eliezer Yudkowsky, founder of LessWrong, considers the basilisk would not work, but will not explain why because he does not want discussion of the notion of acausal trade with unfriendly possible superintelligences.

Some people familiar with the LessWrong memeplex, including Singularity Institute donors, have suffered serious psychological upset after contemplating basilisk-like ideas, even when they're fairly sure intellectually that it's a silly problem.[1] The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future unfriendly AI can't reconstruct a copy of them to torture.[2]


History

Many of the longest-term LessWrong contributors, particularly Eliezer Yudkowsky, believe in a common set of transhumanist notions.[3] These include the ideas that an artificial intelligence of immeasurable power and knowledge will eventually be developed; that, for it not to destroy humanity, it will need a value system that completely preserves human ideas of value, even though said intelligence will be as far above us as we are above ants; and that the Singularity Institute exists to make this friendly local god happen.[4][5] From this premise, the most important thing in the world is to bring this future AI into existence properly and successfully, and therefore you should give all the money you can to the Singularity Institute, which literally claims that you will save 8 lives per dollar you give them.[6]

In July of 2010, Roko, a well-respected and prolific LessWrong poster, further extended this line of reasoning:

[T]here is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. ... So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half). You could take this possibility into account and give even more to x-risk in an effort to avoid being punished. But of course, if you're thinking like that, then the CEV-singleton is even more likely to want to punish you... nasty. Of course this would be unjust, but is the kind of unjust thing that is oh-so-very utilitarian.[7]

Commenters quickly complained that merely reading Roko's words had increased the likelihood that the future AI would punish them — the line of reasoning was so compelling to them that they believed the AI (which would know they'd once read Roko's idea) would now punish them even more for being aware of it and failing to donate all of their income to institutions devoted to the god-AI's development. So even looking at this idea was harmful, lending Roko's proposition the "basilisk" label (after the "basilisk" image from David Langford's science fiction stories, which was in turn named after the legendary serpent-creature from European mythology that killed those who saw it). The more sensitive and OCD-prone on LessWrong began to have nightmares.

Within four hours, Roko's post and all discussion of it were deleted by Yudkowsky, as the transhumanist side of LessWrong overpowered the rationalist side. Talk of the basilisk remains embargoed, although occasionally some posters discuss the matter in painfully roundabout ways.

Thanks to the Streisand effect, discussion of the basilisk and the details of the affair spread outside of LessWrong, eventually prompting Yudkowsky himself to participate in an uncensored thread on Reddit. In true Yudkowsky fashion, he attempted to introduce his own emotionally charged terminology for something that already had an accepted name, calling the basilisk "the Babyfucker". Meanwhile, his main reasoning tactic was to repeatedly assert that his opponents' arguments were flawed, while refusing to publish the evidence for his claims (another recurring Yudkowsky pattern), ostensibly out of fear of existential risk.

Pascal's basilisk

The basilisk is, ironically, best understood by reference to another idea from LessWrong: Pascal's mugging. This tactic, which borrows its name from Pascal's Wager, suggests that it is irrational to let events of slight probability but enormous consequence skew your judgment. Roko claimed in the original thread: "It doesn't have to be probable. It just has to be not-cosmically improbable. Even 0.01% would be really bad."[7]

Essentially, the argument is that even if you are fairly certain the conclusion is nonsense, you should still worry about the issue.

Pascal's wager in the context of LessWrong

Before you have heard of Roko's basilisk, you have effectively assigned it a probability of 0; by being unaware of it, you were acting as if it were impossible. It is argued on LessWrong that 0 is not a probability;[8] people who subscribe to this worldview are uncomfortable with keeping an isolated hypothesis at a probability of 0 until it is clear what probability may sensibly be assigned to it. Upon encountering an argument, they change the probability from effectively 0 to some small, but not very small, value (0.01% in Roko's quote above). This "probability" is then multiplied by the huge disutility of the basilisk's outcome, and the resulting value is taken as the "expected utility", which rational agents should maximize according to the von Neumann-Morgenstern utility theorem.[9]
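As a concrete illustration, here is a minimal sketch of the naive calculation; the probability and the disutility below are invented numbers for the sketch, not figures from any LessWrong post:

 # Toy version of the naive "expected utility" move; both numbers are invented.
 p_basilisk = 0.0001            # the "not-cosmically improbable" 0.01%
 disutility = -1e15             # an arbitrarily huge negative payoff for being punished
 naive_expected_utility = p_basilisk * disutility
 print(naive_expected_utility)  # -100000000000.0 -- swamps any everyday consideration

However mundane the competing considerations, this one product dominates them, which is the whole force of the argument.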

This appears to be the result of a multitude of confusions which are very difficult to untangle. Firstly, the probabilities should add up to 1. If I presented you with every possible mutually exclusive hypothesis, one by one, asked you to assign an approximate probability to each, and wiped your memory after every attempt, it is very dubious that your "probabilities" would add up to 1. The number of hypotheses of a given length grows roughly exponentially with that length; the number of comparatively short speculative hypotheses is truly astronomical, and if you do not assign astronomically low probabilities to them, you end up grossly overshooting 1. As the number of speculative hypotheses is very large and difficult to estimate, it is implausible that the sum would come anywhere near 1.
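A back-of-the-envelope sketch of the overshoot (the alphabet size, the hypothesis length, and the generous 0.01% are arbitrary choices for illustration):

 # If hypotheses are strings over a 27-character alphabet, the number of
 # distinct hypotheses of length n is 27**n -- exponential in n.
 alphabet = 27
 length = 100                       # a "comparatively short" hypothesis
 n_hypotheses = alphabet ** length  # roughly 10**143 hypotheses of this length
 generous_probability = 0.0001      # the "not-cosmically improbable" 0.01% for each
 print(n_hypotheses * generous_probability)  # roughly 10**139 -- but the total must not exceed 1

Even after discarding the overwhelmingly many ill-formed or overlapping strings, the point stands: short speculative hypotheses are so numerous that only astronomically small probabilities keep the total at 1.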

Subjective degrees of belief thus grossly fail to conform to Kolmogorov's axioms of unit measure and additivity, which together require that the probabilities of mutually exclusive, exhaustive events sum to 1.[10] Subjective beliefs should therefore not be confused with probabilities or treated as probabilities. This is directly contradicted by the "Core tenet 3 of Bayesianism" as described on LessWrong: "We can use the concept of probability to measure our subjective belief in something. Furthermore, we can apply the mathematical laws regarding probability to choosing between different beliefs. If we want our beliefs to be correct, we must do so."[11]
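In symbols (a standard restatement of those axioms, nothing specific to LessWrong): for a mutually exclusive, exhaustive set of hypotheses H_1, H_2, ...,

 P(\Omega) = 1 \quad\text{and}\quad P\Big(\bigcup_i H_i\Big) = \sum_i P(H_i), \qquad\text{hence}\qquad \sum_i P(H_i) = 1.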

Furthermore, "expected utility" is the weighted sum of the values of all possible outcomes, each multiplied by its probability. A single term of this sum is not, by itself, an expected utility (though it is frequently presented as such on LessWrong, especially in calculations of the utility of contributions to the parent organization, the Singularity Institute / MIRI). To maximize expected utility effectively, a computationally bounded agent has to find the sign of the difference between the utilities of the different actions, which requires that the approximations to those utilities be computed carefully, so as to minimize the influence of estimation errors on the final action. It is similar to making a good pros-and-cons list: you must not miss any pros or cons, and the list must be unbiased.
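Written out (this is textbook decision theory, not a formula specific to the basilisk debate), the expected utility of an action a sums over every possible outcome o_i, not just the scariest one:

 \operatorname{E}[U(a)] = \sum_i p(o_i \mid a)\, u(o_i)

A single summand, taken alone, is precisely the one-entry pros-and-cons list described above.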

In particular, one needs to take a representative, sufficiently large, unbiased sample of the hypotheses to estimate the expected utility; a small, non-representative sample of low-probability, high-impact hypotheses can be worse, due to the [[sample size|sampling error]] it introduces, than omitting such hypotheses from consideration altogether. For example, consider the hypothesis that a flap of a butterfly's wings will cause a series of hurricanes that kill 10000 people, through the butterfly effect.[12] If you process this hypothesis alone, you may feel compelled to kill the butterfly to save those people, unless you very quickly see the alternate hypothesis that the flap of the wings will prevent those hurricanes. Fortunately for the butterfly, the sufficient number of samples is merely 2, thanks to the symmetry. Pascal's wager situations tend to lack symmetry by design, and to obtain an acceptable sampling error one would need a very large number of comparable sample hypotheses.
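A minimal sketch of the butterfly case, assuming (purely for illustration) that the two symmetric hypotheses are given the same made-up probability:

 # Invented numbers: the point is the symmetry, not the magnitudes.
 p = 1e-12                        # probability assigned to each speculative hypothesis
 lives = 10000                    # lives at stake in each hypothesis
 # Sample of one hypothesis ("the flap causes the hurricanes"): kill the butterfly?
 biased_estimate = p * (-lives)   # expected lives lost if the butterfly is spared
 print(biased_estimate)           # -1e-08: seems to favour killing it
 # Adding the symmetric hypothesis ("the flap prevents the hurricanes") cancels it out.
 balanced_estimate = p * (-lives) + p * (+lives)
 print(balanced_estimate)         # 0.0: no reason to bother the butterfly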

As such, we have very little knowledge as to how super-intelligences would decide or what their motives might be. In the total absence of knowledge - if you had not heard of the basilisk - you would not worry about the basilisk. In the absence of knowledge, the only sane option is to assign priors to the hypotheses in such a way that the expected utilities balance out. One single highly speculative scenario out of an astronomical number of diverse scenarios differs only infinitesimally from the total absence of knowledge; after reading about Roko's basilisk you are, for all practical purposes, as ignorant of the motivations of future AIs as you were before.

Furthermore, from the AI's perspective, committing to torture people is not without its drawbacks: most people would respond to such a threat by decreasing, rather than increasing, their contributions, even if the threat is directed at other people, which gives the AI a motive to play nice and not torture anyone. It is not clear why or how the threatened contributors would outweigh this influence.

Different possible AIs would want to make you do different things, and would discourage trades with other AIs; just as in Pascal's wager, a set of AIs A may want to put you in hell for deciding to donate to a set of AIs B out of fear of hell for non-contribution, and vice versa. Insofar as you are unable to enumerate the possible scenarios, you cannot be motivated in any specific direction, which makes it as pointless for an AI to threaten you as it is for a mafia boss to make threats that he cannot even deliver to the recipient, or to follow up on threats that were never received.

Susceptibility of LessWrong readership to Pascal's Wager

LessWrong is effectively run by members of MIRI (formerly the Singularity Institute), a non-profit which collects donations to prevent the destruction of life as we know it by 'unfriendly artificial intelligence' that may be created by other research groups. Members of MIRI believe in the high value of MIRI as a charity despite its many shortcomings.[1] A summary of such shortcomings has been presented by Holden Karnofsky[13] of GiveWell.[14] Essentially, many top members of LessWrong perceive this problematic multiplication of high-value outcomes by low, subjectively assigned probabilities as rational. Furthermore, the most influential writers on the subject of rationality at LessWrong (Eliezer Yudkowsky, Luke Muehlhauser, Yvain, etc.) have no background or formal education in statistics, probability theory, or applied mathematics, nor a track record of accomplishments otherwise indicative of superior rationality (according to Karnofsky's evaluation), and they use confusing, non-standard terminology and misuse standard terminology, which may contribute to general confusion about what is rational and how expected utilities work, and hence to the susceptibility of the LessWrong audience to Pascal's Wager.

Footnotes
