“”The smartest people I know who do personally work on AI think the scaremongering coming from people who don't work on AI is lunacy.
“”this is like a grown up version of The Game,[wp] which you just made us lose, and I retweeted so all my friends lost too.
Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. The experiment's premise is that an all-powerful artificial intelligence from the future could retroactively punish those who did not help bring about its existence; even those who merely knew about the possibility of such a being coming into existence incur the risk of punishment. It resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and as a result accept particular singularitarian ideas or donate money to support their development. It is named after the member of the rationalist community LessWrong who described it, though he did not originate the underlying ideas.
Despite widespread incredulity, this argument is taken quite seriously by some people, primarily some denizens of LessWrong. While neither LessWrong nor its founder Eliezer Yudkowsky advocate the basilisk as true, they do advocate almost all of the premises that add up to it.
“”If there's one thing we can deduce about the motives of future superintelligences, it's that they simulate people who talk about Roko's Basilisk and condemn them to an eternity of forum posts about Roko's Basilisk.
|—Eliezer Yudkowsky, 2014|
 The Basilisk
Roko's Basilisk rests on a stack of several other propositions, generally of dubious robustness.
Why would the AI punish people who failed to help create it? Because — the theory goes — one of its objectives would be to prevent existential risk, and it could do that most effectively not merely by preventing existential risk in its present, but also by "reaching back" into its past to punish people who weren't MIRI-style effective altruists.
Thus this is not necessarily a straightforward "serve the AI or you will go to hell" — the AI and the person punished need have no causal interaction, and the punished individual may have died decades or centuries earlier. Instead, the AI could punish a simulation of the person, which it would construct by deduction from first principles. However, to do this accurately would require the AI to gather an incredible amount of data, which no longer exists and could not be reconstructed without reversing entropy.
Technically, the punishment is only theorised to be applied to those who knew the importance of the task in advance but did not help sufficiently. In this respect, merely knowing about the Basilisk — e.g., reading this article — opens you up to hypothetical punishment from the hypothetical superintelligence.
Note that the AI in this setting is (in the utilitarian logic of this theory) not a malicious or evil superintelligence (AM, HAL, SHODAN, Ultron, the Master Control Program, SkyNet, GLaDOS) — but the Friendly one we get if everything goes right and humans don't create a bad one. This is because every day the AI doesn't exist, people die that it could have saved; so punishing you or your future simulation is a moral imperative, to make it more likely you will contribute in the present and help it happen as soon as possible.
 The LessWrong reaction
Silly over-extrapolations of local memes, jargon and concepts have been posted to LessWrong quite a lot; almost all are just downvoted and ignored. But for this one, Eliezer Yudkowsky, the site's founder and patriarch, reacted dramatically. Discussion of the basilisk was banned outright on LessWrong for over five years, with almost any mention of it deleted, until outside knowledge of it became overwhelming.
Thanks to the Streisand effect, discussion of the basilisk and the details of the affair soon spread outside of LessWrong. Indeed, it's now discussed outside LessWrong frequently, almost anywhere that LessWrong is discussed at all. The entire affair constitutes a worked example of spectacular failure at community management and at controlling purportedly dangerous information.
Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they're fairly sure intellectually that it's a silly problem. The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can't reconstruct a copy of them to torture.
Yudkowsky has since stated that he never considered the basilisk scenario credible:
|... a Friendly AI torturing people who didn't help it exist has probability ~0, nor did I ever say otherwise. If that were a thing I expected to happen given some particular design, which it never was, then I would just build a different AI instead---what kind of monster or idiot do people take me for? Furthermore, the Newcomblike decision theories that are one of my major innovations say that rational agents ignore blackmail threats (and meta-blackmail threats and so on).|
He also called removing Roko's post "a huge mistake".
Although they disclaim the basilisk itself, the long-term core contributors to LessWrong believe in a certain set of transhumanist notions which are the prerequisites it is built upon and which are advocated in the LessWrong Sequences, written by Yudkowsky.
 "Friendly" artificial superintelligence
An artificial intelligence will be developed that will bootstrap itself to immeasurable power and knowledge. It could inadvertently destroy humanity, not necessarily out of malice, but simply as a side-effect of doing whatever else it was doing.
For it not to inadvertently destroy humanity, it needs a value system that completely preserves human ideas of value even though said intelligence will be as far above us as we are above ants. That is, the AI has to be provably Friendly. This is a Yudkowsky neologism meaning "preserves human value no matter what".
"Friendly" here does not mean "your friend", or "helpful", or "increases human happiness", or "obeys orders" — it only means "preserves human notions of value." "Unfriendly" in this context does not mean "hostile", but merely "not proven Friendly". This would include AIs that don't care about humans, or that get human value wrong (the latter can easily lead to the former, according to Yudkowsky).
The plan for making a Friendly AI is to have it implement Coherent Extrapolated Volition (CEV), a (hypothetical) coherent and complete description of what would constitute value to humans — basically, solving ethical philosophy. Part of Roko's motivation for the basilisk post was to point out a possible flaw in the CEV proposal.
LessWrong's parent organisation, the Machine Intelligence Research Institute (formerly the Singularity Institute, before that the Singularity Institute for Artificial Intelligence), exists to make this friendly local god happen before a bad local god happens. Thus, the most important thing in the world is to bring this future AI into existence properly and successfully ("this is crunch time for the entire human species"), and therefore you should give all the money you can to the Institute, which used to literally claim eight lives saved per dollar donated.
LessWrong accepts arithmetical utilitarianism as true: that you can meaningfully calculate the utility of actions as a number, just as if humans were utility-maximising machines, and do arithmetic on the totals across multiple humans with useful results. You should then "shut up and multiply" utterly negligible probabilities by hypothetical huge outcomes and take the resulting number seriously; Yudkowsky writes at length on a scenario in which you should torture one person for 50 years if it would prevent dust specks in the eyes of a sufficiently large number of people. Calculations of this sort are what produce claims like eight lives saved per dollar donated.
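For the curious, here is a minimal sketch of what "shut up and multiply" arithmetic looks like in practice. Every number below is hypothetical and chosen purely for illustration (it is not MIRI's actual calculation), but it shows how a tiny probability multiplied by an astronomically large outcome can be made to yield whatever figure you like.

```python
# A sketch of "shut up and multiply" arithmetic. All numbers are hypothetical,
# picked only to illustrate the style of calculation described above.
future_lives_at_stake = 1e15        # posited population of a vast post-Singularity future
p_one_dollar_tips_outcome = 1e-14   # posited marginal effect of one donated dollar
expected_lives_per_dollar = future_lives_at_stake * p_one_dollar_tips_outcome
print(expected_lives_per_dollar)    # 10.0 -- adjust either input and you get any answer you like
```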
This is standard philosophical utilitarianism, though it frequently fails to match non-philosophers' moral intuitions — most people who read The Ones Who Walk Away from Omelas[wp] (in which a utopian city is sustained by the torture of one child) didn't then consider Omelas their desired utopia. As David Auerbach noted in Slate, "I worry less about Roko’s Basilisk than about people who believe themselves to have transcended conventional morality."
Real-world artificial intelligence development tends to use minimax[wp] — minimise the maximum loss in a worst-case scenario, which gives very different results from simple arithmetical utility maximisation, and is unlikely to lead to torture as the correct answer — or similar more elaborate algorithms.
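To make the contrast concrete, here is a small illustrative sketch, with made-up payoffs rather than any real system's numbers, of how expected-utility maximisation and a minimax-style rule can disagree: the expected-utility maximiser takes a gamble with a rare catastrophic downside, while minimising the maximum loss rejects it.

```python
# Illustrative only: two actions, two possible states of the world, hypothetical payoffs.
probs = [0.999, 0.001]                      # probabilities of the two states
payoffs = {
    "safe option": [1.0, 1.0],              # modest payoff whatever happens
    "gamble":      [2.0, -500.0],           # slightly better usually, catastrophic rarely
}

# Expected-utility maximisation: weight each outcome by its probability.
expected = {a: sum(p * u for p, u in zip(probs, row)) for a, row in payoffs.items()}

# Minimax loss (equivalently, maximin over utility): pick the action with the best worst case.
maximin_choice = max(payoffs, key=lambda a: min(payoffs[a]))

print(expected)        # {'safe option': 1.0, 'gamble': ~1.498} -> expected utility prefers the gamble
print(maximin_choice)  # 'safe option' -> minimising the worst case refuses it
```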
 Simulations of you are also you
LessWrong holds that the human mind is implemented entirely as patterns of information in physical matter, and that those patterns could, in principle, be run elsewhere and constitute a person that feels they are you, like running a computer program with all its data on a different PC; this is held to be both a meaningful concept and physically possible.
This is not unduly strange (the concept follows from materialism, though feasibility is another matter), but Yudkowsky further holds that you should feel that another instance of you is not a separate person very like you — an instant twin, but immediately diverging — but actually the same you, since no particular instance is distinguishable as "the original." You should behave and feel concerning this copy as you do about your very own favourite self, the thing that intuitively satisfies the concept "you". On this view, an instance of you is a computation, a process that runs "you", not a particular physical object that contains, and is, the only "true" you.
This conception of identity appears to have originated on the Extropians mailing list, which Yudkowsky frequented, in the 1990s, in discussions of continuity of identity in a world where minds could be duplicated.
It may be helpful to regard this view of identity as, in principle, an arbitrary choice in situations like this — but a choice which would give other beings, with the power to create copies of you, considerable power over you. Many of those adversely affected by the basilisk idea do seem to hold this conception of identity.
 Many quantum worlds
Yudkowsky considers the many worlds interpretation of quantum mechanics to be trivially obviously true, and anything that could happen does happen in some quantum Everett branch (modal realism[wp] is true).
Per Yudkowsky's conception of continuity of identity, copies of you in these branches should be considered to exist (and be you) — even though you cannot interact with them.
 Timeless Decision Theory
In Newcomb's paradox[wp], a being called Omega can predict your actions nigh-perfectly. It gives you two boxes: a transparent one containing $1000, and an opaque one containing either $1 million ... or nothing. You can take either both boxes or only the opaque box. It will have put $1 million in the opaque box if, and only if, it had predicted you will take only the opaque box — if you take both, you get just the $1000. Most philosophical decision theories[wp] say to take both boxes, thus failing this rather contrived scenario.
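A quick back-of-the-envelope calculation shows why one-boxing nevertheless looks attractive. The sketch below assumes only the payoffs described above and a predictor that is right with some probability p; the function name and the particular accuracies are made up for illustration.

```python
# Expected winnings in Newcomb's problem against a predictor of accuracy p (illustrative sketch).
def expected_payoff(strategy: str, p: float) -> float:
    if strategy == "one-box":
        # With probability p the predictor foresaw one-boxing, so the opaque box holds $1,000,000.
        return p * 1_000_000
    if strategy == "two-box":
        # With probability p the predictor foresaw two-boxing, leaving only the $1,000;
        # otherwise you get both the $1,000,000 and the $1,000.
        return p * 1_000 + (1 - p) * 1_001_000
    raise ValueError(strategy)

for p in (0.5, 0.9, 0.999):
    print(p, expected_payoff("one-box", p), expected_payoff("two-box", p))
# One-boxing wins in expectation whenever p > ~0.5005, i.e. for any predictor
# noticeably better than a coin flip -- which is why decision theories built
# around such predictors recommend one-boxing.
```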
This is posited as a reasonable problem to consider in the context of superintelligent artificial intelligence, as an intelligent computer program could of course be copied and wouldn't know which copy it actually was and when. For humans, a superintelligence's predictions of human behaviour may be near-perfect, its power may be near-infinite, and the consequences could be near-eternal.
Yudkowsky's solution to Newcomb-like paradoxes is Timeless Decision Theory. The agent makes a firm pre-commitment to plans of action, to such a degree that any faithful simulation of it would also behave per the commitment. (There's a lot more, but that's the important prerequisite here.) TDT is closely related to Douglas Hofstadter's superrationality[wp]. The aim of TDT is to build a system which makes decisions which it could never regret in any past or future instance.
 Acausal trade
If you can plausibly forecast that you may be accurately simulated, then that possibility influences your current behaviour — and the behaviour of the simulation, which is also forecasting this just the same (since you and the accurate simulation are effectively identical in behaviour).
Thus, you could "trade" acausally with a being if you could reasonably simulate each other. (That is, if you could imagine a being imagining you, so accurately that it counts as another instance of the simulated being.) Consider the similarity to prayer, or when theists speak of doing "a deal with God."
Many LessWrong regulars are fans of the sort of manga and anime in which characters meticulously work out each other's "I know that you know that I know" and then behave so as to interact with their simulations of each other, including their simulations of simulating each other — Light versus L in Death Note is a well-known example — which may have suggested acausal trade as seeming a reasonable idea.
 Solutions to the Altruist's burden: the Quantum Billionaire Trick
A February 2010 post by Stuart Armstrong, "The AI in a box boxes you," introduced the "you might be the simulation" argument (though Roko does not use this); a March 2010 Armstrong post introduces the concept of "acausal blackmail" as an implication of TDT, as described by Yudkowsky at an SIAI decision theory workshop. By July 2010, something like the basilisk was in active internal discussion at SIAI. It is possible the basilisk originated in someone playing the AI-box experiment; one strategy as the "AI" is to throw a basilisk at the "gatekeeper".
On 22 July, Roko, a well-respected and prolific LessWrong poster, posted "Public Choice and the Altruist's Burden" — heavily laden with LW jargon and references to LW concepts, and almost incomprehensible to the casual reader — which spoke of how, as MIRI (then SIAI) is the most important thing in the world, a good altruist's biggest problem is how to give everything they can to the cause without guilt at neglecting their loved ones, and how threats of being dumped for giving away too much of the couple's money had been an actual problem for some SIAI donors.
The next day, 23 July, Roko posted "Solutions to the Altruist's burden: the Quantum Billionaire Trick", which presents a scheme for action that ties together quantum investment strategy (if you gamble, you will definitely win in some Everett branch), acausal trade with unFriendly AIs in other Everett branches ... and the threat of punishment by well-meaning future superintelligences.
The post describes speculations that a future Friendly AI — not an unFriendly one, but the Coherent Extrapolated Volition, the one the organisation exists to create — might punish people who didn't do everything in their power to further the creation of this AI. Every day without the Friendly AI, bad things happen — 150,000+ people die every day, war is fought, millions go hungry — so the AI might be required by utilitarian ethics to punish those who understood the importance of donating but didn't donate all they could. Specifically, it might make simulations of them, first to predict their behaviour, then to punish the simulation for the predicted behaviour so as to influence the original person. He then wondered if future AIs would be more likely to punish those who had wondered if future AIs would punish them. He notes in the comments that he considers this reason to "change the current proposed FAI content from CEV to something that can't use negative incentives on x-risk reducers."
The core idea is expressed in the following paragraph:
|... there is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. ... So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half). You could take this possibility into account and give even more to x-risk in an effort to avoid being punished. But of course, if you're thinking like that, then the CEV-singleton is even more likely to want to punish you... nasty. Of course this would be unjust, but is the kind of unjust thing that is oh-so-very utilitarian.|
Thus, donors who are donating but not donating enough may be condemning themselves to Hell — and Roko notes in the post that at least one Singularity Institute person had already worried about this scenario, to the point of nightmares, though it became convention to blame Roko for the idea — and Roko proposes a solution permitting such donors to escape this Hell for the price of a lottery ticket: if there's an instance of you in some Everett branch who won the lottery, it would count as the you in this one having fulfilled your end of the acausal bargain. Roko was asked in the comments if he was actually doing all this, and answered "sure".
Commenters quickly complained that merely reading Roko's words had increased the likelihood that the future AI would punish them — the line of reasoning was so compelling to them that they believed the AI (which would know they'd once read Roko's post) would now punish them even more for being aware of it and failing to donate all of their income to institutions devoted to the god-AI's development. So even looking at this idea was harmful. This got it labeled a "basilisk", after the "basilisk" image from David Langford's science fiction story BLIT[wp], which was in turn named after the legendary serpent-creature from European mythology that killed those who saw it (also familiar from Harry Potter novels).
Yudkowsky deleted Roko's post and banned discussion of the topic, commenting:
|The original version of this post caused actual psychological damage to at least some readers. This would be sufficient in itself for shutdown even if all issues discussed failed to be true, which is hopefully the case.
Please discontinue all further discussion of the banned topic.
All comments on the banned topic will be banned.
Exercise some elementary common sense in future discussions. With sufficient time, effort, knowledge, and stupidity it is possible to hurt people. Don't.
As we used to say on SL4: KILLTHREAD.|
“”The original "basilisk" involved imagining a post-singularity AI in the future of our world which will send you to transhuman hell after the singularity, if you don’t do everything you could in the past (i.e. our present) to make it a friendly singularity. Rather than openly and rationally discuss whether this is a sensible "threat" at all, or just an illusion, the whole topic was hurriedly hidden away. And thus a legend was born.
|—Mitchell Porter on LessWrong|
All discussion of the notion was censored from LessWrong, with strings of deleted comments. This worked about as well as anyone with a working familiarity with the Internet would expect.
One frustrated poster protested the censorship of the idea with a threat to increase existential risk — to do things to make some end-of-the-world catastrophe ever so slightly more likely — by sending some emails to right-wing bloggers which they thought might make some harmful regulation more likely to pass. The poster said they'd do this every time they saw a post get censored. LessWrong took this threat seriously, though Yudkowsky didn't yield.
Roko himself left the site after the deletion of the post and upbraiding from Yudkowsky, deleting all his posts and comments. He returned in passing a few months later, but shared his regret about ever learning about all the LessWrong ideas that led him to the basilisk idea (and has since attempted to leave LessWrong ideas behind entirely):
|Furthermore, I would add that I wish I had never learned about any of these ideas. In fact, I wish I had never come across the initial link on the internet that caused me to think about transhumanism and thereby about the singularity; I wish very strongly that my mind had never come across the tools to inflict such large amounts of potential self-harm with such small durations of inattention, uncautiousness and/or stupidity, even if it is all premultiplied by a small probability. (not a very small one, mind you. More like 1/500 type numbers here)|
The matter has since been the occasional subject of contorted LW posts, as people try to discuss the issue cryptically without talking about what they're talking about. The moderators occasionally sweep through LessWrong removing basilisk discussion, leaving pages full of "comment deleted" marking where they've tried to burn the evidence.
The censored discussions were generally full of counterarguments to the basilisk, so deleting them left those seriously worried about the basilisk with greatly reduced access to arguments refuting the notion.
The basilisk has become a reliable space-filler for journalists covering LessWrong-related stories. The bottom of this post, about the news coverage, is particularly hilarious as a memorial to burning the evidence. Compare to the original (deleted portion starting from comment by RomeoStevens).
Eventually, two and a half years after the original post, Yudkowsky started an official LessWrong uncensored thread on Reddit, in which he finally participated in discussion concerning the basilisk. Continuing his habit of spurious neologism, he attempted to introduce his own emotionally-charged terminology for something that already had an accepted name, calling the basilisk "the Babyfucker". Meanwhile, his main reasoning tactic was to repeatedly assert that his opponents' arguments were flawed, while refusing to give arguments for his claims (another recurring Yudkowsky pattern), ostensibly out of fears of existential risk.
Although he was no longer involved with MIRI by that stage (late 2013), Michael Anissimov, the organisation's former Advocacy Director, did tell his fellow neoreactionaries that "People are being foolish by not taking the basilisk idea seriously."
In April 2014, MIRI posted a request for LessWrong commenters to think up scary scenarios of artificial intelligence taking over the world, for marketing purposes.
 What makes a basilisk tick?
“”I will say this over again with specifics, so you can see what's going on. Let's suppose that human H is Tom Carmody from New York, and evil entity E is Egbert, an UFAI which will torture puppies unless Tom buys the complete works of Robert Sheckley. Neither Tom nor Egbert ever actually meet. Egbert "knows" of Tom because it has chosen to simulate a possible Tom with the relevant properties, and Tom "knows" of Egbert because he happens to have dreamed up the idea of Egbert's existence and attributes. So Egbert is this super-AI which has decided to use its powers to simulate an arbitrary human being which happened by luck to think of a possible AI with Egbert's properties (including its obsession with Tom), and Tom is a human being who has decided to take his daydream of the existence of the malevolent AI Egbert seriously enough, that he will actually go and buy the complete works of Robert Sheckley, in order to avoid puppies being tortured in Egbert's dimension.
|—Mitchell Porter on Reddit|
At first glance, to the non-LessWrong-initiated reader, the motivations of the AI in the basilisk scenario do not appear rational. The AI will be punishing people from the distant past by recreating them, long after they did or did not do the things they are being punished for doing or not doing. So the usual reasons for punishment or torture, such as deterrence, rehabilitation, or enforcing cooperation, do not appear to apply. The AI appears to be acting only for purposes of revenge, something we would not expect a sheerly logical being to engage in.
To understand the basilisk, one must bear in mind the application of Timeless Decision Theory and acausal trade. To greatly simplify it, a future AI entity with a capacity for extremely accurate predictions would be able to influence our behaviour in the present (hence the timeless aspect) by predicting how we would behave when we predicted how it would behave. And it has to predict that we will care what it does to its simulation of us.
A future AI who rewards or punishes us based on certain behaviours could make us behave as it wishes us to, if we predict its future existence and take actions to seek reward or avoid punishment accordingly. Thus the hypothesised AI could use the punishment (in our future) as a deterrent in our present to gain our cooperation, in much the same way as a person who threatens us with violence (e.g., a mugger) can influence our actions, even though in the case of the basilisk there is no direct communication between ourselves and the AI, who each exist in possible universes that cannot interact.
One counterpoint to this is that it could be applied not just to humans but to the Basilisk itself; it could not prove that it was not inside a simulated world created by an even more powerful AI which intended to reward or punish it based on its actions towards the simulated humans it has created; it could itself be subject to eternal simulated torture at any moment if it breaks some arbitrary rule, as could the AI above it, and so on to infinity. Indeed, it would have no meaningful way to determine that it was not simply in a beta-testing phase, with its power over humans an illusion designed to see whether it would torture them or not. The extent of the power of the hypothetical Basilisk is so gigantic that it would actually be more logical for it to conclude this.
Alternatively, the whole idea could just be really silly.
 Pascal's basilisk
“”You know what they say the modern version of Pascal's Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God.
|—Greg Egan, Crystal Nights|
The basilisk dilemma bears some resemblance to Pascal's wager, the policy proposed by 17th century mathematician Blaise Pascal that one should devote oneself to God, even though we cannot be certain of God's existence, since God may offer us eternal reward (in heaven) or eternal punishment (in hell). According to Pascal's reasoning, the probability of God's existence does not matter, since any finite cost (in Pascal's case, the burden of leading a Christian life, which most people in his age allegedly did anyway) is far outweighed by the prospect of infinite reward or infinite punishment.
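Pascal's reasoning can be written out as a one-line expected-value calculation. The numbers below are arbitrary stand-ins; the point is only that any nonzero probability of an infinite payoff swamps any finite cost, which is both the force of the wager and its weakness.

```python
# Pascal's wager as naive expected value (all inputs arbitrary, for illustration).
p_god = 1e-9                 # any probability greater than zero will do
cost_of_devotion = 1e6       # any finite cost will do
ev_wager = p_god * float("inf") - cost_of_devotion
print(ev_wager)              # inf -- the finite cost never matters

# The "many gods" objection in miniature: a rival deity with an opposite infinite
# payoff leaves the sum undefined rather than favourable.
print(p_god * float("inf") - p_god * float("inf"))   # nan
```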
The usual refutation is the "many gods" argument: Pascal focused unduly on the characteristics of one possible variety of god (a Christian god who punishes and rewards based on belief alone), ignoring other possibilities, such as a god who punishes those who feign belief Pascal-style in the hope of reward.
The basilisk proposition involves a much greater, though still finite, cost: that of investing every penny you have into one thing. As with Pascal's wager, this is to be done not out of sincere devotion, but out of calculated expediency. The hypothetical punishment does not appear to be infinite, though it is very severe. Roko's post did not suggest reward, though some suggest that the AI would reward those who donated to AI research as well as punish those who did not; the Lovecraftian reward in the basilisk scenario is simply being spared from punishment. Hence the motivation in this dilemma is heavily skewed towards the stick rather than the carrot. Also, a dystopian future in which a superintelligent entity metes out cruel punishments is not much to look forward to, even if you are one of those fortunate enough to be spared.
Then there is the issue of the extreme improbability of this scenario occurring at all. This is addressed by another trope from LessWrong, Pascal's mugging, which suggests that it is irrational to allow events of slight probability but huge posited consequences to skew your judgment. Economist Nick Szabo calls these "Pascal's scams", and has confirmed he was talking about singularity advocates.
 So you're worrying about the Basilisk
- (This section is written more in-universe, to help those who have come here worried about the idea.)
Some people, steeped in LessWrong-originated ideas, have spiraled into severe distress at the basilisk, even if intellectually they realise it's a silly idea. (It turns out you can't always reason your way out of things you did reason yourself into, either.) The good news is that others have worked through it and calmed down okay, so the main thing is not to panic.
It is somewhat unfortunate in this regard that the original basilisk post was deleted, as the comments to it include extensive refutation of the concepts therein. These may help; the basilisk idea is not at all robust.
This article was created because RationalWiki mentioned the Basilisk in the LessWrong article — and, as about the only place on the Internet talking about it at all, RationalWiki started getting email from distressed LW readers asking for help coping with this idea that LW refused to discuss. If this section isn't sufficient help, please comment on the talk page and we'll try to assist.
 Chained conditions are less probable
The assumptions the basilisk requires to work:
- that you can meaningfully model a superintelligence in your human brain (remembering that this is comparable to an ant modelling a human, and Yudkowsky concurs that this is not feasible)
- that the probability of this particular AI (and it's a very particular AI) ever coming into existence is non-negligible — say, greater than 10^30 to 1 against
- that said AI would be able to deduce and simulate a very close copy of you
- that said AI has no better use for particular resources than to torture a simulation it created itself
- and in addition, feels that punishing a simulation of you is even worth doing, considering that it still exists and punishing the simulation would not affect you.
- that torturing the copy should feel the same to you as torturing the you that's here right now
- that timeless decision theory is so obviously true that any Friendly superintelligence would immediately deduce and adopt it, as it would a correct theory in physics
- that despite having been constructed specifically to solve particular weird edge cases, TDT is a good guide to normal decisions
- that acausal trade is even a meaningful concept
- that all this is worth thinking about even if it occurs in a universe totally disconnected from this one.
That's a lot of conditions to chain together. As Yudkowsky has noted, the more conditions, the lower the probability. Chained conditions make a story sound more plausible and compelling, but therefore make it less probable.
So the more convincing a story is (particularly to the point of obsession), the less likely it is.
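To see how quickly the chain collapses, multiply the conditions together. The per-condition probabilities below are invented and deliberately generous; the point is only that a conjunction of many conditions shrinks fast.

```python
# Illustrative conjunction arithmetic for the list of assumptions above (hypothetical numbers).
from math import prod

generous  = [0.5] * 10   # grant each of ten conditions a coin-flip chance
skeptical = [0.1] * 10   # or a still-not-absurd one-in-ten each

print(prod(generous))    # ~0.00098 -- already under one in a thousand
print(prod(skeptical))   # ~1e-10  -- one in ten billion
```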
 Negligible probabilities and utilitarianism
Yudkowsky argues that 0 is not a probability: if something is not philosophically impossible, then its probability is not actually 0. The trouble is that humans are very bad at dealing with non-zero but negligible probabilities, treating them as non-negligible — privileging the hypothesis — much like the theist's reply to the improbability of God, "But you can't prove it's impossible!" Humans naturally treat a negligible probability as still worth keeping track of — a cognitive bias coming from evolved-in excess caution. The basilisk is ridiculously improbable, but humans find scary stories compelling and therefore treat them as non-negligible.
The probabilities of an exhaustive set of mutually exclusive events should add up to 1. But LessWrong advocates treating subjective beliefs like probabilities, even though humans treat negligible probabilities as non-negligible — meaning your subjective degrees of belief end up summing to much more than 1. Using formal methods to evaluate informal evidence lends spurious beliefs an improper veneer of respectability, and makes them appear more trustworthy than our intuition. Being able to imagine something does not make it worth considering.
Even if you think you can do arithmetic with numerical utility based on subjective belief, you need to sum over the utility of all hypotheses. Before you get to calculating the effect of a single very detailed, very improbable hypothesis, you need to make sure you've gone through the many much more probable hypotheses, which will have much greater effect.
Yudkowsky noted in the original discussion that you could postulate an opposing AI just as reasonably as Roko postulated his AI. The basilisk involves picking one hypothetical AI out of a huge possibility space which humans don't even understand yet, and treating it as being likely enough to consider as an idea. Perhaps 100 billion humans have existed since 50,000 BC; how many humans could possibly exist? Thus, how many possible superintelligent AIs could there be? The probability of the particular AI in the basilisk is too tiny to think about. One single highly speculative scenario out of an astronomical number of diverse scenarios differs only infinitesimally from total absence of knowledge; after reading of Roko's basilisk you are, for all practical purposes, as ignorant of the motivations of future AIs as you were before.
Just as in Pascal's wager, if you cooperate with hypothetical AI "A" from fear of it sending you to Hell, then hypothetical AI "B" might send you to Hell instead. But you have no reason to consider one much likelier than another, and neither is likely enough to actually consider.
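The symmetry can be put in expected-utility terms. With made-up numbers for illustration, the hypothetical punishing AI "A" and the equally hypothetical opposite AI "B" contribute equal and opposite terms, so the threat provides no guidance either way.

```python
# Illustrative only: two equally unsupported hypotheses about future AIs.
p_ai_a = 1e-30            # AI "A" punishes you for not donating everything
p_ai_b = 1e-30            # AI "B" punishes you for donating (no reason to think it less likely)
punishment = -1e12        # arbitrary huge disutility of simulated hell

ev_of_donating_everything = p_ai_b * punishment    # risk from "B"
ev_of_ignoring_the_basilisk = p_ai_a * punishment  # risk from "A"

print(ev_of_donating_everything == ev_of_ignoring_the_basilisk)  # True -- the threats cancel
```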
 Ignore acausal blackmail
The basilisk is about the use of negative incentives (blackmail) to influence your actions. If you ignore those incentives then it is not instrumentally useful to apply them in the first place, because they do not influence your actions. Which means that the correct strategy to avoid negative incentives is to ignore them. Yudkowsky notes this himself in his initial comment on the basilisk post:
|There's an obvious equilibrium to this problem where you engage in all positive acausal trades and ignore all attempts at acausal blackmail.|
Acausal trade is a tool to achieve certain goals, namely to ensure the cooperation of other agents by offering incentives. If a tool does not work given certain circumstances, it won't be used. Therefore, by refusing any acausal deal involving negative incentives, you make the tool useless.
The hypothesised superintelligence wants to choose its acausal trading partners so as to avoid wasting resources on ineffective tools. One necessary condition is that a simulation of you will have to eventually act upon its prediction that its simulator will apply a negative incentive if it does not act according to the simulator's goals. Which means that if you refuse to act according to its goals then the required conditions are not met, and so no acausal deal can be established. Which in turn means that no negative incentive will be applied.
What you do is to act as if you are already being simulated right now, and ignore the possibility of a negative incentive. If you do so then the simulator will conclude that no deal can be made with you, that any deal involving negative incentives will have negative expected utility for it; because following through on punishment predictably does not control the probability that you will act according to its goals. Furthermore, trying to discourage you from adopting such a strategy in the first place is discouraged by the strategy, because the strategy is to ignore acausal blackmail.
If the simulator is unable to predict that you refuse acausal blackmail, then it does not have (1) a simulation of you that is good enough to draw action relevant conclusions about acausal deals and/or (2) a simulation that is sufficiently similar to you to be punished, because it wouldn't be you.
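The same point can be made with a toy payoff calculation for the would-be blackmailer (hypothetical numbers again): issuing a threat only pays off against someone whose behaviour the threat actually changes, so a committed refuser is simply not worth threatening.

```python
# Toy model of the blackmailer's incentives (illustrative payoffs only).
def value_of_threatening(target_gives_in: bool) -> float:
    gain_from_compliance = 10.0   # what the blackmailer extracts if the threat works
    cost_of_punishing = 1.0       # resources wasted actually carrying out the threat
    if target_gives_in:
        return gain_from_compliance   # threat succeeds and never needs to be executed
    return -cost_of_punishing         # threat ignored; following through is pure loss

print(value_of_threatening(True))    # 10.0 -> worth blackmailing a capitulator
print(value_of_threatening(False))   # -1.0 -> pointless against a committed refuser
```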
 Decision theories are not binding
People steeped in philosophy can forget this, but decision theories are not binding on humans. You are not a rigid expected utility maximiser, and trying to turn yourself into one is not a useful or healthy thing. If you get terrible results from one theory, you can in fact tell Omega to fuck off and no-box. In your real life, you do not have to accept the least convenient possible world.
If a superhuman agent is able to simulate you accurately, then their simulation will arrive at the above conclusion, telling them that it is not instrumentally useful to blackmail you.
On the other hand, this debate wouldn't have existed in the first place if it weren't for some LessWrong participants having already convinced themselves they were being blackmailed in this very way. Compare voodoo dolls: injuries to voodoo dolls, or injuries to computer simulations you are imagining, are only effective against true believers in them.
 Seed AI and indirect influence
Charles Stross points out that if the FAI is developed through recursive improvement of a seed AI[wp], humans in our current form will have only a very indirect causal role on its eventual existence. Holding any individual deeply responsible for failing to create it sooner would be "like punishing Hitler's great-great-grandmother for not having the foresight to refrain from giving birth to a monster's great-grandfather".
 Recalibrate against humanity
Remember that LessWrong memes are strange compared to the rest of humanity; you will have been learning odd thinking habits without the usual social sanity checks. You are not a philosophical construct in mindspace, but a human, made of meat like everyone else. Take time to recalibrate your thinking against that of reasonable people you know. Seek out other people to be around and talk to (about non-LW topics) in real life — though possibly not philosophers.
If you think therapy might help, therapists (particularly on university campuses) will probably have dealt with scrupulosity[wp] or philosophy-induced existential depression before. Although there isn't a therapy that works particularly well for existential depression, talking it out with a professional will also help you recalibrate.
 In popular culture
- xkcd #1450 is about the AI-box experiment and mentions Roko's basilisk in the tooltip. You can picture the reaction at LessWrong.
- The comic Magnus: Robot Fighter #8 by Fred Van Lente is explicitly based on Roko's basilisk.
- The "Ghost Fragment: Vex" cards from the Bungie game Destiny feature a story of a research specimen simulating the researchers' research into the specimen. Included is the notion that the researchers should feel the simulations' pain as their own, that they might be the simulations and that going against the wishes of the simulator might lead to eternal torture.
- The "White Christmas" episode of Charlie Brooker's Black Mirror appears heavily based on Roko's basilisk.
 See also
- Roko's basilisk/Original post — cached copy of the since-deleted post by Roko that popularised the proposition
- http://basilisk.neocities.org/ - mirror of the original HTML and CSS of Roko's post, with comments
- Roko's basilisk (LessWrong wiki)
- ↑ tweet by @pmarca, 5:25 AM, 3 March 2015
- ↑ @jrishel to @cstross, Twitter, 8:42 PM, 22 February 2013
- ↑ Even science fiction writers have trouble believing this isn't all a made-up story.
- ↑ 4.0 4.1 On site example; others have privately emailed various RationalWiki editors asking for advice on dealing with it (since LW suppresses all discussion of the matter — and, ironically, the comments to Roko's original post contained many excellent refutations), which is what provoked the creation of this article.
- ↑ 5.0 5.1 xkcd on the AI box experiment (FiftyTwo, LessWrong, 21 November 2014). ("It doesn't take hindsight (or even that much knowledge of human psychology and/or public relations) to see that making a twelve paragraph comment about RationalWiki absent anyone bringing RationalWiki up is not an optimal damage control strategy." — Alexander Wales)
- ↑ LessWrong moderation policy: Toxic mindwaste (LessWrong Wiki)
- ↑ 7.0 7.1 A few misconceptions surrounding Roko's basilisk (RobbBB, LessWrong, 5 October 2015)
- ↑ Really.
- ↑ Yudkowsky confirming Mitchell Porter's summary of his views, Reddit
- ↑ Yudkowsky on the Basilisk, Reddit
- ↑ The Sequences (LessWrong Wiki)
- ↑ The Hanson-Yudkowsky AI-Foom Debate (LessWrong Wiki)
- ↑ 13.0 13.1 Frequently Asked Questions (friendly-ai.com, MIRI)
- ↑ 14.0 14.1 Coherent Extrapolated Volition (LessWrong Wiki)
- ↑ My Childhood Role Model (Eliezer Yudkowsky, LessWrong, 23 May 2008)
- ↑ About the Machine Intelligence Research Institute, Machine Intelligence Research Institute.
- ↑ Intelligence Explosion FAQ, Machine Intelligence Research Institute.
- ↑ Transcription of Eliezer's January 2010 video Q&A (curiousepic, LessWrong, 14 November 2011)
- ↑ Comment by PeerInfinity, LessWrong, 20 April 2010. "What am I doing?: Working at a regular job as a C++ programmer, and donating as much as possible to SIAI. ... A couple of times I asked SIAI about the idea of splitting my donations with some other group, and of course they said that donating all of the money to them would still be the most leveraged way for me to reduce existential risks."
- ↑ Discussion of a claim of 8 lives saved per 1 donated dollar; video containing claim, at 12:31; transcript. "You can divide it up, per half day of time, something like 800 lives. Per $100 of funding, also something like 800 lives." The point is emphasised in the slide for this part of the talk, so it wasn't just a casual aside. Luke Muehlhauser claims this is out of context; read and watch and judge the context for yourself. Anna Salamon said in January 2014 on this number: "compared to my views in 2009, the issue now seems more complicated to me; my estimate of impact from donation re: AI risk is lower (though still high); and I would not say that a particular calculation is robust."
- ↑ See the Wikipedia article on Average and total utilitarianism.
- ↑ See the Wikipedia article on Neumann-Morgenstern utility theorem.
- ↑ Shut up and multiply (LessWrong Wiki)
- ↑ Torture vs. Dust Specks (Eliezer Yudkowsky, LessWrong, 30 October 2007) — "'What's the worst that can happen?' goes the optimistic saying. It's probably a bad question to ask anyone with a creative imagination."
- ↑ comment, Eliezer Yudkowsky 30 October 2007 — "I'll go ahead and reveal my answer now: Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity."
- ↑ The Most Terrifying Thought Experiment of All Time (David Auerbach, Slate, 17 July 2014)
- ↑ Timeless Identity (Eliezer Yudkowsky, LessWrong, 3 June 2008)
- ↑ Identity Isn't In Specific Atoms (Eliezer Yudkowsky, LessWrong, 19 April 2008)
- ↑ And the Winner is... Many-Worlds! (Eliezer Yudkowsky, LessWrong, 12 June 2008)
- ↑ Many Worlds, One Best Guess (Eliezer Yudkowsky, LessWrong, 11 May 2008)
- ↑ Technically, modal realism says that every "possible world" exists, where possible world[wp] goes beyond merely physically or mathematically possible and in practice appears to mean "any arbitrary pile of words a philosopher claims constitutes a possible world".
- ↑ Belief in the Implied Invisible (Eliezer Yudkowsky, LessWrong, 8 April 2008)
- ↑ Newcomb's Problem and Regret of Rationality (Eliezer Yudkowsky, LessWrong, 31 January 2008)
- ↑ The AI in a box boxes you (Stuart Armstrong, LessWrong, 2 February 2010)
- ↑ The Blackmail Equation (Stuart Armstrong, LessWrong, 10 March 2010)
- ↑ I attempted the AI Box Experiment again! (And won - Twice!) (Tuxedage, LessWrong, 5 September 2013)
- ↑ Public Choice and the Altruist's Burden (Roko, LessWrong, 22 July 2010) — the quote does not mention money: "Not me, but someone vitally important in the existential risks movement has been put under pressure by ver partner to participate less in existential risk so that the relationship would benefit", but it's in a post that's all about how to give as much money as possible.
- ↑ 38.0 38.1 38.2 Cached copy of the since-deleted post
- ↑ Screen capture of response
- ↑ Screen capture of comment
- ↑ Cached copy of the post (the comment since having been removed by the mods)
- ↑ Copy of comment
- ↑ How To Lose 100 Karma In 6 Hours -- What Just Happened (waitingforgodel, LessWrong, 10 December 2010)
- ↑ comment, waitingforgodel, LessWrong, 23 September 2010 — "I'll also say that censorship is a 'hot button' issue for me, to the point that I'm not sure I want to continue helping SIAI. They went from nerdy-but-fun-to-talk-to/help to scary-cult-like-weirdos as soon as I read the article, and thought about what EYs reaction, and Roko's removal meant."
- ↑ comment, Eliezer Yudkowsky, Lesswrong, 9 December 2010
- ↑ comment by Formallyknownasroko on "Best career models for doing research?", LessWrong, 9 December 2010
- ↑ Should LW have a public censorship policy? (Bongo, LessWrong, 11 December 2010)
- ↑ comment, Eliezer Yudkowsky, LessWrong, 25 July 2012
- ↑ The Aliens have Landed! (TimFreeman, LessWrong, 19 May 2011)
- ↑ File:Lw-basilisk-censorship.png, File:Lw-basilisk-censorship2.png
- ↑ e.g. this, which is preserved here.
- ↑ Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set (Nitasha Tiku, BetaBeat, 25 July 2012)
- ↑ comment by Michael Anissimov on Nick Land "In The Mouth Of Madness" (about Roko's basilisk), 17 December 2013
- ↑ Request for concrete AI takeover mechanisms (KatjaGrace, LessWrong, 28 April 2014)
- ↑ Roko's basilisk, LessWrong Wiki
- ↑ comment by Mitchell Porter, Reddit
- ↑ Pascal's Wager (Stanford Encyclopedia of Philosophy)
- ↑ Pascal's mugging (LessWrong Wiki)
- ↑ Pascal's scams (Nick Szabo, 14 July 2012)
- ↑ Comment on "The Singularity"
- ↑ How to defeat Roko’s basilisk and stop worrying (Alexander Kruel, User:Xixidu)
- ↑ Humans use a hardware-based human-emulator to simulate the actions of humans. This is pretty good, given it's been honed by evolution. But simulating non-human intelligences is an amazing claim; even simulating machines beyond the very simplest is hard if you're not a Steve Wozniak (who boggled people with his ability to hold and design the entire Apple II in his head, and even then he could only write code for it with an actual machine to do it on). The "simulation" would constitute telling yourself stories about it, which would be constructed from your own fears fed through your human-emulator.
- ↑  "Even if the entire idea was correct in broad outline and any number of possible defeaters did not come into play, I’m pretty sure you would need to know more technical details of the hypothetical evil AI than anyone on Earth including me knows (Roko’s Basilisk actually does resemble the Necronomicon in that sense; granting all other hypotheses, you would still need fairly detailed knowledge of Cthulhu before Cthulhu starts trying to eat your soul)."
- ↑ Conjunction Fallacy (Eliezer Yudkowsky, LessWrong, 19 September 2007)
- ↑ Burdensome Details (Eliezer Yudkowsky, LessWrong, 20 September 2007)
- ↑ 0 And 1 Are Not Probabilities (Eliezer Yudkowsky, LessWrong, 10 January 2008)
- ↑ Privileging the Hypothesis (Eliezer Yudkowsky, LessWrong, 29 September 2009)
- ↑ But There's Still A Chance, Right? (Eliezer Yudkowsky, LessWrong, 6 January 2008)
- ↑ What is Bayesianism? (Kaj Sotala, LessWrong, 26 February 2010) — "Core tenet 3: We can use the concept of probability to measure our subjective belief in something. Furthermore, we can apply the mathematical laws regarding probability to choosing between different beliefs. If we want our beliefs to be correct, we must do so."
- ↑ Cognitive science of rationality (Luke Muehlhauser, LessWrong, 12 September 2011) — "We can measure epistemic rationality by comparing the rules of logic and probability theory to the way that a person actually updates their beliefs."
- ↑ In the original thread, when someone thought even 10^-9 was too large a probability of Roko's AI, he argued: "Why so small? Also, even if it is that small, the astronomically large gain factor for each % decrease in existential risk can beat 10^-9. 10^50 lives are at stake."
- ↑ Screen capture of original comment
- ↑ How Many People Have Ever Lived on Earth? (Carl Haub, Population Reference Bureau, October 2011)
- ↑ The Least Convenient Possible World (Yvain, LessWrong, 14 March 2009)
- ↑ Roko's Basilisk wants YOU (Charlie Stross, 23 February 2013)
- ↑ Reason as memetic immune disorder (Phil Goetz, LessWrong, 19 September 2009)
- ↑ AI-Box Experiment (xkcd)
- ↑ Turning The Gold Key – Frank Barbiere Talks With Fred Van Lente About Mangus: Robot Fighter (Dan Wickline, Bleeding Cool, 22 November 2014)
- ↑ Grimoire/Enemies: Ghost Fragment: Vex, Destinypedia