Roko's basilisk

this is like a grown up version of The Game, which you just made us lose, and I retweeted so all my friends lost too.[1]

Roko's basilisk is a proposition, first described by a member of the rationalist community LessWrong, that an all-powerful artificial intelligence from the future may retroactively punish those who did not assist in bringing about its existence. As an argument used to persuade people to subscribe to particular singularitarian ideas, or even to donate money to them, it resembles a futurist version of Pascal's wager.

Despite widespread incredulity,[2] this entire saga is about things that are actually believed by some groups of people. It must be noted, though, that LessWrong does not believe in or advocate the basilisk — just in almost all of the premises that add up to it.


Summary

The claim is that a hypothetical, or inevitable, ultimate intelligence may punish those who fail to help it or help create it, with greater punishment given to those who knew the importance of the task in advance. In a sense, it could be considered singularitarian hell (and we don't mean just a robotics conference on the practicalities of developing technology).

However, this idea goes a bit beyond just "serve the AI or you will go to hell" — the AI and the person punished have no causal interaction; the punishment would be of a simulation of the person, which the AI would construct by deduction from first principles. In LessWrong's Timeless Decision Theory (TDT),[3] this is taken to be equivalent to punishment of your own actual self, not just someone else very like you. Furthermore, you might be the simulation.

Roko's basilisk is notable for being completely banned from discussion on LessWrong, where any mention of it is deleted.[4] Eliezer Yudkowsky, founder of LessWrong, considers the basilisk to not work, but will not explain why because he does not consider open discussion of the notion of acausal trade with possible superintelligences to be provably safe.

Silly over-extrapolations of local memes, jargon and concepts are posted to LessWrong quite a lot; almost all are just downvoted and ignored. But for this one, Yudkowsky reacted to it hugely, then doubled down on his reaction. Thanks to the Streisand effect, discussion of the basilisk and the details of the affair soon spread outside of LessWrong. Indeed, it's now discussed outside LessWrong frequently, almost anywhere that LessWrong is discussed at all. The entire affair constitutes a worked example of spectacular failure at community management and at controlling purportedly dangerous information.

Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they're fairly sure intellectually that it's a silly problem.[5] The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can't reconstruct a copy of them to torture.[6]

Background

Although they disclaim the basilisk itself, the long-term core contributors to LessWrong believe in a certain set of transhumanist notions which are prerequisites to it. These include:

  • An artificial intelligence will be developed that will bootstrap itself to immeasurable power and knowledge.[7] For it not to inadvertently destroy humanity, it needs a value system that completely preserves human ideas of value[8] even though said intelligence will be as far above us as we are above ants.[9] LessWrong's parent organisation, the Machine Intelligence Research Institute (formerly the Singularity Institute), exists to make this friendly local god happen before a bad local god happens.[10][11] Thus, the most important thing in the world is to bring this future AI into existence properly and successfully ("this is crunch time for the entire human species"),[12] and therefore you should give all the money you can to the Institute, who literally claim "8 lives saved per dollar donated".[13]
  • To make things go right, this AI has to be provably Friendly, a Yudkowsky neologism meaning "preserves human value no matter what".[14] "Unfriendly" in this context does not mean "hostile", but merely "not proven Friendly". The plan for making a Friendly AI is to have it implement Coherent Extrapolated Volition (CEV),[8] a (hypothetical) coherent and complete description of what would constitute value to humans — basically, solving ethical philosophy.
  • Arithmetical (total) utilitarianism:[15] that you can meaningfully calculate the utility of actions as a number, just as if humans were utility-maximising machines,[16] and do arithmetic on it with useful results. You should then "shut up and multiply"[17] utterly negligible probabilities by hypothetical huge outcomes, and take the resulting number seriously — Yudkowsky writes at length[18] on a scenario in which you should torture one person for 50 years to prevent dust specks in the eyes of a sufficiently large number of people [19] — resulting in claims like "8 lives saved per dollar".
  • Copies of you are also you, not separate people very like you, and you should feel that these are your own self.[20] Sufficiently accurate simulations of you are also you.
  • The many worlds interpretation of quantum mechanics is trivially obviously true,[21] anything that could happen does happen in some quantum Everett branch[22] (modal realism[wp] is true[23]), and copies of you in these branches should be considered to exist (and be you) even though you cannot interact with them.[24]
  • Future events "cause" past events: if you can plausibly forecast a future event, and then take actions in the present because of this prediction, then the future event "caused" your action, in some sense. Thus, you could "trade" acausally with a being if you could reasonably simulate each other. This sort of thinking gets odd when you imagine superintelligences, because their predictions of human behaviour may be near-perfect (giving Newcomb's paradox[wp], a philosophical construction in which you have to win against a being that can predict you near-perfectly); their power may be near-infinite; and the consequences could be near-eternal: actually doing a deal with God. The posited solution involves firm precommitment to plans of action, to such a degree that any faithful simulation of you would behave per the commitment. Yudkowsky is formulating this into an attempted complete philosophical decision theory, Timeless Decision Theory, intended to solve Newcomb-like paradoxes (a small worked example of the Newcomb payoff arithmetic follows this list). TDT is closely related to Douglas Hofstadter's superrationality[wp].
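For the arithmetically inclined, here is a minimal sketch of the Newcomb payoff calculation referred to above. The payoff values are the standard illustrative ones ($1,000 in the visible box, $1,000,000 in the opaque box) and the predictor's accuracy is an assumed parameter; none of this comes from LessWrong's own formalism, it just shows why a near-perfect predictor makes "one-boxing" win on a straight expected-value calculation.

```python
# A minimal sketch of the expected-payoff arithmetic behind Newcomb's paradox,
# using the standard illustrative payoffs ($1,000 visible box, $1,000,000
# opaque box) and an assumed predictor accuracy p.

def expected_payoff(one_box: bool, predictor_accuracy: float) -> float:
    """Expected winnings for a given choice against a predictor of given accuracy."""
    small, big = 1_000, 1_000_000
    if one_box:
        # The predictor correctly foresaw one-boxing with probability p and filled the opaque box.
        return predictor_accuracy * big
    # Two-boxing: you always get the small box; the big box is filled only
    # if the predictor wrongly expected you to one-box (probability 1 - p).
    return small + (1 - predictor_accuracy) * big

for p in (0.5, 0.9, 0.99):
    print(p, expected_payoff(True, p), expected_payoff(False, p))
# As p approaches 1, one-boxing dominates -- the intuition TDT tries to turn
# into a principled decision rule rather than an ad-hoc exception.
```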

Solutions to the Altruist's burden: the Quantum Billionaire Trick


On 22 July 2010, Roko, a well-respected and prolific LessWrong poster, posted "Public Choice and the Altruist's Burden" — heavily laden with LW jargon and references to LW concepts, and almost incomprehensible to the casual reader — which spoke of how, as MIRI (then SI) is the most important thing in the world, a good altruist's biggest problem is how to give everything they can to the cause without guilt at neglecting their loved ones, and how threats of being dumped for giving away too much of the couple's money had been an actual problem for some SI donors.[25]

The next day, 23 July 2010, Roko posted "Solutions to the Altruist's burden: the Quantum Billionaire Trick", which presents a scheme for action that ties together quantum investment strategy (if you gamble, you will definitely win in some Everett branch), acausal trade with unFriendly AIs in other Everett branches ... and the threat of punishment by well-meaning future superintelligences.[26]

The post describes speculations that a future Friendly AI might punish people who didn't do everything in their power to further the creation of this AI. Every day without the Friendly AI, bad things happen — 150,000+ people die irretrievably every day, war is fought, millions go hungry — so the AI might punish those who understood the importance of donating but didn't donate all they could. Specifically, it might make simulations of them, first to predict their behaviour, then to punish the simulation for the predicted behaviour so as to influence the original person. He then wondered if future AIs would be more likely to punish those who had wondered if future AIs would punish them. The core idea is expressed in the following paragraph:

[T]here is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. ... So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half). You could take this possibility into account and give even more to x-risk in an effort to avoid being punished. But of course, if you're thinking like that, then the CEV-singleton is even more likely to want to punish you... nasty. Of course this would be unjust, but is the kind of unjust thing that is oh-so-very utilitarian.

Thus, donors who are donating but not donating enough may be condemning themselves to Hell — and Roko notes that some Singularity Institute people had already worried about this scenario (though it became convention to blame Roko for the idea). Roko proposes a solution permitting such donors to escape this Hell for the price of a lottery ticket.

Commenters quickly complained that merely reading Roko's words had increased the likelihood that the future AI would punish them — the line of reasoning was so compelling to them that they believed the AI (which would know they'd once read Roko's post) would now punish them even more for being aware of it and failing to donate all of their income to institutions devoted to the god-AI's development. So even looking at this idea was harmful, lending the proposition the "basilisk" label, after the "basilisk" image from David Langford's science fiction stories, which was in turn named after the legendary serpent-creature from European mythology that killed those who saw it (also familiar from Harry Potter novels). The more sensitive and OCD-prone on LessWrong began to have nightmares.

Yudkowsky promptly hit the roof.[27] Within four hours, Roko's post and all discussion of it were deleted by an extremely pissed-off Yudkowsky, with this comment:[28]

The original version of this post caused actual psychological damage to at least some readers. This would be sufficient in itself for shutdown even if all issues discussed failed to be true, which is hopefully the case.

Please discontinue all further discussion of the banned topic.

All comments on the banned topic will be banned.

Exercise some elementary common sense in future discussions. With sufficient time, effort, knowledge, and stupidity it is possible to hurt people. Don't.

As we used to say on SL4: KILLTHREAD.

It is possible the basilisk originated in someone playing the AI-box experiment; one strategy as the "AI" is to throw a basilisk at the "gatekeeper".[29]

Aftereffects

The original "basilisk" involved imagining a post-singularity AI in the future of our world which will send you to transhuman hell after the singularity, if you don’t do everything you could in the past (i.e. our present) to make it a friendly singularity. Rather than openly and rationally discuss whether this is a sensible "threat" at all, or just an illusion, the whole topic was hurriedly hidden away. And thus a legend was born.
—Mitchell Porter on LessWrong[30]

All discussion of the notion was censored from LessWrong, with strings of deleted posts. This worked about as well as anyone with a working familiarity with the Internet would expect.

One frustrated poster protested the censorship of the idea with a threat to increase existential risk — to do things to make some end-of-the-world catastrophe ever so slightly more likely — by sending some emails to right-wing bloggers which they thought might make some harmful regulation more likely to pass.[31] The poster said they'd do this every time they saw a post get censored.[32][33] LessWrong took this threat seriously, though Yudkowsky didn't yield.[34]

The matter is now the occasional subject of contorted LW posts, as people try to discuss the issue cryptically without talking about what they're talking about.[35][36] The moderators occasionally sweep through LessWrong removing basilisk discussion,[37][38] with the occasional page full of "comment deleted" marking where they've tried to burn the evidence.

The censored discussions are generally full of counterarguments to the basilisk. The censorship thus left those seriously worried about the basilisk with greatly reduced access to arguments refuting the notion.

The basilisk has become a reliable space-filler for journalists covering LessWrong-related stories.[39] The bottom of this post, about the news coverage, is particularly hilarious as a memorial to burning the evidence. Compare to the original (deleted portion starting from comment by RomeoStevens).

Eventually, two and a half years after the original post, Yudkowsky started an official LessWrong uncensored thread on Reddit, in which he finally participated in discussion concerning the basilisk. Continuing his habit of spurious neologism, he attempted to introduce his own emotionally-charged terminology for something that already had an accepted name, calling the basilisk "the Babyfucker". Meanwhile, his main reasoning tactic was to repeatedly assert that his opponents' arguments were flawed, while refusing to give arguments for his claims (another recurring Yudkowsky pattern), ostensibly out of fears of existential risk.

What makes a basilisk tick?

I will say this over again with specifics, so you can see what's going on. Let's suppose that human H is Tom Carmody from New York, and evil entity E is Egbert, an UFAI which will torture puppies unless Tom buys the complete works of Robert Sheckley. Neither Tom nor Egbert ever actually meet. Egbert "knows" of Tom because it has chosen to simulate a possible Tom with the relevant properties, and Tom "knows" of Egbert because he happens to have dreamed up the idea of Egbert's existence and attributes. So Egbert is this super-AI which has decided to use its powers to simulate an arbitrary human being which happened by luck to think of a possible AI with Egbert's properties (including its obsession with Tom), and Tom is a human being who has decided to take his daydream of the existence of the malevolent AI Egbert seriously enough, that he will actually go and buy the complete works of Robert Sheckley, in order to avoid puppies being tortured in Egbert's dimension.
—Mitchell Porter on Reddit[40]

At first glance, to the non-LessWrong-initiated reader, the motivations of the AI in the basilisk scenario do not appear rational. The AI will be punishing people from the distant past by recreating them, long after they did or did not do the things they are being punished for doing or not doing. So the usual reasons for punishment or torture, such as deterrence, rehabilitation, or enforcing cooperation, do not appear to apply. The AI appears to be acting purely for purposes of revenge, something we would not expect a purely logical being to engage in.

To understand the basilisk, one must bear in mind the application of Timeless Decision Theory and acausal trade. To greatly simplify it, a future AI entity with a capacity for extremely accurate predictions would be able to influence our behaviour in the present (hence the timeless aspect) by predicting how we would behave when we predicted how it would behave. And it has to predict that we will care what it does to its simulation of us.

A future AI who rewards or punishes us based on certain behaviours could make us behave as it wishes us to, if we predict its future existence and take actions to seek reward or avoid punishment accordingly. Thus the hypothesised AI could use the punishment (in our future) as a deterrent in our present to gain our cooperation, in much the same way as a person who threatens us with violence (e.g., a mugger) can influence our actions, even though in the case of the basilisk there is no direct communication between ourselves and the AI, who each exist in possible universes that cannot interact.
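As a toy illustration of that influence loop, consider the sketch below. The utility numbers are invented for the example and the "prediction" is reduced to a single boolean; the point is only that the threat does all the work through the human's expectation of the AI's policy, with no causal contact modelled at all.

```python
# A toy model of the claimed influence loop, with made-up utility numbers.
# The "AI" commits to a policy (punish predicted non-donors); the human chooses
# an action based on what they predict that policy to be. Each side only
# reasons about its prediction of the other.

PUNISHMENT = -1_000   # assumed disutility of being (or identifying with) the tortured copy
DONATION_COST = -10   # assumed disutility of handing over your disposable income

def human_choice(expects_punishment_policy: bool) -> str:
    """Pick the action with higher expected utility, given the predicted AI policy."""
    if expects_punishment_policy:
        utilities = {"donate": DONATION_COST, "ignore": PUNISHMENT}
    else:
        utilities = {"donate": DONATION_COST, "ignore": 0}
    return max(utilities, key=utilities.get)

print(human_choice(expects_punishment_policy=True))   # "donate" -- the expected threat does the work
print(human_choice(expects_punishment_policy=False))  # "ignore"
```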

Pascal's basilisk

You know what they say the modern version of Pascal's Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God.
—Greg Egan, Crystal Nights

The basilisk dilemma bears some resemblance to Pascal's wager, the argument proposed by 17th-century mathematician Blaise Pascal that one should devote oneself to God, even though we cannot be certain of God's existence, since God may offer us eternal reward (in heaven) or eternal punishment (in hell). According to Pascal's reasoning, the probability of God's existence does not matter, since any finite cost (in Pascal's case, the burden of leading a Christian life, which most people in his age did anyway) is far outweighed by the prospect of infinite reward or infinite punishment.
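In expected-value terms, Pascal's dominance argument can be sketched with made-up numbers, using a huge finite payoff as a stand-in for the infinite one; the conclusion is insensitive to how small the probability of the rewarding god is made, which is exactly what gives the wager (and the basilisk) its apparent force.

```python
# A rough sketch of Pascal's dominance argument with invented numbers: a finite
# cost of belief, a huge (stand-in for infinite) reward or punishment, and any
# non-zero probability of the rewarding god existing.

COST_OF_BELIEF = -100          # assumed finite lifetime cost of piety
HEAVEN = 10**18                # stand-in for an infinite reward
HELL = -10**18                 # stand-in for an infinite punishment

def expected_utility(believe: bool, p_god: float) -> float:
    if believe:
        return p_god * HEAVEN + COST_OF_BELIEF
    return p_god * HELL

for p in (1e-3, 1e-9, 1e-15):
    print(p, expected_utility(True, p) > expected_utility(False, p))  # True every time
```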

The usual refutation is the "many gods" argument:[41] Pascal focused unduly on the characteristics of one possible variety of god (a Christian god who punishes and rewards based on belief alone), ignoring other possibilities, such as a god who punishes those who feign belief Pascal-style in the hope of reward.

The basilisk proposition involves a much greater, though still finite, cost: that of investing every penny you have into one thing. As with Pascal's wager, this is to be done not out of sincere devotion, but out of calculated expediency. The hypothetical punishment does not appear to be infinite, though it would be very large. Roko's post did not suggest reward, though some suggest that the AI would reward those who donated to AI research as well as punish those who did not. The Lovecraftian reward in the basilisk scenario is simply being spared from punishment. Hence the motivation in this dilemma is heavily skewed towards the stick rather than the carrot. Also, a dystopian future in which a hyperintelligent entity metes out cruel punishments is not much to look forward to, even if you are one of those fortunate enough to be spared.

Then there is the issue of the extreme improbability of this scenario occurring at all. This is addressed by another trope from LessWrong: "Pascal's mugging", which suggests that it is irrational to allow events of slight probability but huge posited consequences to skew your judgment.[42] Economist Nick Szabo calls these "Pascal's scams",[43] and has confirmed he was talking about singularity advocates.[44]
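The arithmetic of a Pascal's scam is easy to reproduce. The sketch below uses the figures from the original thread (see footnote 52), plus an invented "mundane" alternative for comparison; the multiplication is trivially correct, which is the problem: the inputs, not the sums, are where the scam lives.

```python
# The arithmetic behind a "Pascal's scam", using the figures quoted in
# footnote 52: even a 1-in-a-billion probability, multiplied by a claimed
# payoff of 10^50 lives, produces an expected value that steamrolls any
# everyday consideration.

p_claim = 1e-9          # probability conceded to the scenario
claimed_payoff = 1e50   # lives allegedly at stake
mundane_good = 1e6      # invented comparison: a certainty of saving a million lives some other way

print(p_claim * claimed_payoff)                 # 1e+41
print(p_claim * claimed_payoff > mundane_good)  # True -- the mugging "wins" on paper
```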

So you're worrying about the Basilisk

(This section is written more in-universe, to help those who are here worried about the idea.)

Some people, steeped in LessWrong-originated ideas, have spiraled into severe distress at the basilisk, even if intellectually they realise it's a silly idea. (It turns out you can't always reason your way out of things you did reason yourself into, either.) The good news is that others have worked through it and calmed down okay,[45] so the main thing is not to panic.

It is somewhat unfortunate in this regard that the original basilisk post was deleted, as the comments[26] to it include extensive refutation of the concepts therein. These may help; the basilisk idea is not at all robust.

Chained conditions are less probable

The assumptions the basilisk requires to work:

  • that you can meaningfully model a superintelligence in your human brain (remembering that this is comparable to an ant modelling a human)
  • that the probability of this particular AI (and it's a very particular AI) ever coming into existence is non-negligible — say, greater than 10^30 to 1 against
  • that said AI would be able to deduce and simulate a very close copy of you
  • that timeless decision theory is so obviously true that even a malicious superintelligence would immediately deduce and adopt it, as it would a correct theory in physics
    • that despite having been constructed specifically to solve particular weird edge cases, TDT is a good guide to normal decisions
  • that said AI has no better use for particular resources than to torture a simulation it created itself
  • that torturing the copy should feel the same to you as torturing the you that's here right now
  • that acausal trade is even a meaningful concept
  • that this is worth thinking about even if it occurs in a universe totally disconnected from this one.

That's a lot of conditions to chain together. As Yudkowsky has noted, the more conditions, the lower the probability.[46] Chained conditions make a story seem more plausible and compelling, but therefore less probable.

So the more convincing a story is (particularly to the point of obsession), the less likely it is.
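To see how fast the chaining bites, the sketch below assigns each of the nine assumptions above an invented, deliberately generous independent probability and multiplies them together; the specific numbers are illustrative only.

```python
# A minimal illustration of why chaining conditions drives the probability down:
# even granting each listed assumption a generous independent probability, the
# conjunction of all of them is far smaller. Per-item numbers are invented.

from math import prod

generous_odds = [0.5, 0.1, 0.1, 0.2, 0.2, 0.1, 0.5, 0.1, 0.1]  # one per listed assumption
print(prod(generous_odds))  # roughly 1e-07, and realistic estimates would be far lower
```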

Negligible probabilities and utilitarianism

Yudkowsky argues that 0 is not a probability: if something is not philosophically impossible, then its probability is not actually 0.[47] The trouble is that humans are very bad at dealing with non-zero but negligible probabilities, treating them as non-negligible — privileging the hypothesis[48] — much like the theist's reply to the improbability of God, "But you can't prove it's impossible!"[49] Humans naturally treat a negligible probability as still worth keeping track of — a cognitive bias coming from evolved-in excess caution. The basilisk is ridiculously improbable, but humans find scary stories compelling and therefore treat them as non-negligible.

Probabilities of mutually exclusive, exhaustive events should add up to 1. But LessWrong advocates treating subjective beliefs like probabilities,[50][51] even though humans treat negligible probabilities as non-negligible — meaning your subjective degrees of belief sum to much more than 1. Using formal methods to evaluate informal evidence lends spurious beliefs an improper veneer of respectability, and makes them appear more trustworthy than our intuition. Being able to imagine something does not make it worth considering.

Even if you think you can do arithmetic with numerical utility based on subjective belief,[52] you need to sum over the utility of all hypotheses. Before you get to calculating the effect of a single very detailed, very improbable hypothesis, you need to make sure you've gone through the many much more probable hypotheses, which will have much greater effect.

Yudkowsky noted in the original discussion[53] that you could postulate an opposing Friendly AI just as reasonably as Roko postulated his unFriendly AI. The basilisk involves picking one hypothetical AI out of a huge possibility space which humans don't even understand yet, and treating it as being likely enough to be worth considering. Perhaps 100 billion humans have existed since 50,000 BC;[54] the space of possible humans is vastly larger than that, and the space of possible superintelligent AIs larger still. The probability of the particular AI in the basilisk is too tiny to think about. One single highly speculative scenario out of an astronomical number of diverse scenarios differs only infinitesimally from total absence of knowledge; after reading of Roko's basilisk you are, for all practical purposes, as ignorant of the motivations of future AIs as you were before.

Just as in Pascal's wager, if you cooperate with hypothetical AI "A" from fear of it sending you to Hell, then hypothetical AI "B" might send you to Hell instead. But you have no reason to consider one much likelier than another, and neither is likely enough to actually consider.
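The same point can be made in the expected-utility terms LessWrong favours: put the basilisk alongside a few equally arbitrary rival AIs and the contributions cancel. The probabilities and utilities below are invented; the only feature that matters is the symmetry.

```python
# A sketch of the "many gods"-style rejoinder in expected-utility terms: for any
# hypothetical AI that punishes non-donors, one can posit an equally improbable
# AI that punishes donors, rewards donors, or rewards non-donors. With no reason
# to weight one over the others, the terms cancel rather than compel.

hypotheses = [
    # (probability, utility of donating, utility of not donating)
    (1e-30, 0,     -1000),   # basilisk: punishes non-donors
    (1e-30, -1000, 0),       # anti-basilisk: punishes donors
    (1e-30, 1000,  0),       # rewards donors
    (1e-30, 0,     1000),    # rewards non-donors
]

eu_donate = sum(p * u_d for p, u_d, _ in hypotheses)
eu_ignore = sum(p * u_n for p, _, u_n in hypotheses)
print(eu_donate, eu_ignore)  # both cancel to zero: no decision leverage either way
```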

Ignore acausal blackmail

The gist of the basilisk is the use of negative incentives (blackmail) to influence your actions. If you ignore those incentives then it is not instrumentally useful to apply them in the first place, because they do not influence your actions. Which means that the correct strategy to avoid negative incentives is to ignore them.

Ignoring negative incentives to avoid their eventual application works because acausal trade is a tool to achieve certain goals, namely to ensure the cooperation of other agents by offering incentives. If a tool does not work given certain circumstances then it won't be used. Therefore by refusing any acausal deal involving negative incentives, you make the tool useless.

The hypothesised superintelligence wants to choose its acausal trading partners so as to avoid wasting resources on ineffective tools. One necessary condition is that a simulation of you would eventually have to act upon its prediction that its simulator will apply a negative incentive if it does not act according to the simulator's goals. So if you refuse to act according to its goals, the required conditions are not met, no acausal deal can be established, and no negative incentive will be applied.

What you do is act as if you are already being simulated right now, and ignore the possibility of a negative incentive. If you do so, the simulator will conclude that no deal can be made with you, and that any deal involving negative incentives has negative expected utility for it, because following through on the punishment predictably does not control the probability that you will act according to its goals. Furthermore, trying to discourage you from adopting such a strategy in the first place is discouraged by the strategy, because the strategy is to ignore acausal blackmail.

If the simulator is unable to predict that you refuse acausal blackmail, then it does not have (1) a simulation of you that is good enough to draw action-relevant conclusions about acausal deals, or (2) a simulation that is sufficiently similar to you to be punished, because it wouldn't be you.
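A toy decision-theoretic version of this argument, with invented utility numbers: the blackmailer only gains from committing to a punishment policy if that policy actually changes the victim's behaviour, so against a victim whose fixed policy is to ignore threats, making (and having to carry out) the threat is strictly worse than not making it.

```python
# A toy sketch of the "ignore blackmail" argument above, with invented numbers.
# The blackmailer benefits from committing to punishment only if the commitment
# changes the victim's behaviour; against an ignore-threats policy it simply
# burns resources, so a resource-conscious agent would not bother.

PUNISH_COST = -1        # assumed cost to the AI of running the torture simulation
GAIN_IF_DONATE = 10     # assumed value to the AI of your cooperation

def victim_action(policy: str, threatened: bool) -> str:
    if policy == "responds_to_threats" and threatened:
        return "donate"
    return "ignore"

def blackmailer_payoff(victim_policy: str, threaten: bool) -> float:
    action = victim_action(victim_policy, threaten)
    payoff = GAIN_IF_DONATE if action == "donate" else 0
    if threaten and action == "ignore":
        payoff += PUNISH_COST   # a committed threat must be carried out, to no effect
    return payoff

for policy in ("responds_to_threats", "ignores_threats"):
    print(policy, {t: blackmailer_payoff(policy, t) for t in (True, False)})
# Against "ignores_threats", threatening is strictly worse for the blackmailer.
```

This is the sense in which "just ignore it" is not merely bravado but the move that removes the hypothetical AI's incentive to bother in the first place.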

Decision theories are not binding

People steeped in philosophy can forget this, but decision theories are not binding on humans. You are not a rigid expected utility maximiser, and trying to turn yourself into one is not a useful or healthy thing. If you get terrible results from one theory, you can in fact tell Omega to fuck off and no-box. In your real life, you do not have to accept the least convenient possible world.[55]

If a superhuman agent is able to simulate you accurately, then their simulation will arrive at the above conclusion, telling them that it is not instrumentally useful to blackmail you.

Seed AI and indirect influence

Charles Stross points out[56] that if the FAI is developed through recursive improvement of a seed AI, humans in our current form will have only a very indirect causal role on its eventual existence. Holding any individual deeply responsible for failing to create it sooner would be "like punishing Hitler's great-great-grandmother for not having the foresight to refrain from giving birth to a monster's great-grandfather".

Recalibrate against humanity

Remember that LessWrong memes are strange compared to the rest of humanity; you will have been learning odd thinking habits without the usual social sanity checks.[57] You are not a philosophical construct in mindspace, but a human, made of meat like everyone else. Take time to recalibrate your thinking against that of reasonable people you know. Seek out other people to be around and talk to (about non-LW topics) in real life — though possibly not philosophers.

If you think therapy might help, therapists (particularly on university campuses) will have dealt with philosophy-induced existential depression before. Although there isn't a therapy that works particularly well for existential depression, talking it out with a professional will also help you recalibrate.

Footnotes

  1. @jrishel to @cstross, Twitter, 8:42 PM, Feb 22, 2013
  2. Even science fiction writers have trouble believing this isn't all a made-up story.
  3. http://wiki.lesswrong.com/wiki/Timeless_decision_theory
  4. LessWrong moderation policy: Toxic mindwaste.
  5. On site example; others have privately emailed various RationalWiki editors asking for advice on dealing with it (since LW suppresses all discussion of the matter — and, ironically, the comments to Roko's original post contained many excellent refutations), which is what provoked the creation of this article.
  6. Really.
  7. http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate
  8. http://wiki.lesswrong.com/wiki/Coherent_Extrapolated_Volition
  9. http://lesswrong.com/lw/ql/my_childhood_role_model/
  10. About the Machine Intelligence Research Institute.
  11. Intelligence Explosion FAQ. Machine Intelligence Research Institute.
  12. http://lesswrong.com/lw/8fd/transcription_of_eliezers_january_2010_video_qa/
  13. Discussion of a claim of 8 lives saved per 1 donated dollar
  14. Frequently Asked Questions (friendly-ai.com, MIRI)
  15. See the Wikipedia article on Average and total utilitarianism.
  16. See the Wikipedia article on the Von Neumann-Morgenstern utility theorem.
  17. http://wiki.lesswrong.com/wiki/Shut_up_and_multiply
  18. http://lesswrong.com/lw/kn/torture_vs_dust_specks/
  19. http://lesswrong.com/lw/kn/torture_vs_dust_specks/uf7
  20. Timeless Identity (Eliezer Yudkowsky, LessWrong, 3 June 2008)
  21. And the Winner is... Many-Worlds! (Eliezer Yudkowsky, LessWrong, 12 June 2008)
  22. Many Worlds, One Best Guess (Eliezer Yudkowsky, LessWrong, 11 May 2008)
  23. Technically, modal realism says that every "possible world" exists, where possible world[wp] goes beyond merely physically or mathematically possible and in practice appears to mean "any arbitrary pile of words a philosopher claims constitutes a possible world".
  24. Belief in the Implied Invisible (Eliezer Yudkowsky, LessWrong, 8 April 2008)
  25. Public Choice and the Altruist's Burden (Roko, LessWrong, 22 July 2010) — the quote does not mention money: "Not me, but someone vitally important in the existential risks movement has been put under pressure by ver partner to participate less in existential risk so that the relationship would benefit", but it's in a post that's all about how to give as much money as possible. This post is curiously absent from the list of 2010 LessWrong posts, despite still being publicly available and other Roko posts from 2010 being listed.
  26. Cached copy of the since-deleted post
  27. http://kruel.co/lw/r03.png
  28. http://kruel.co/lw/r04.png
  29. http://lesswrong.com/r/discussion/lw/ij4/i_attempted_the_ai_box_experiment_again_and_won/
  30. Cached copy of the post (the comment since having been removed by the mods)
  31. http://lesswrong.com/lw/38u/best_career_models_for_doing_research/33vl
  32. http://lesswrong.com/lw/39l/how_to_loose_100_karma_in_6_hours_what_just/
  33. http://lesswrong.com/lw/2ft/open_thread_july_2010_part_2/2o25?c=1
  34. http://lesswrong.com/lw/38u/best_career_models_for_doing_research/33v3
  35. http://lesswrong.com/lw/39z/should_lw_have_a_public_censorship_policy/
  36. http://lesswrong.com/lw/ds4/article_about_lw_faith_hope_and_singularity/73ao
  37. File:Lw-basilisk-censorship.png, File:Lw-basilisk-censorship2.png
  38. e.g. this, which is preserved here.
  39. http://betabeat.com/2012/07/singularity-institute-less-wrong-peter-thiel-eliezer-yudkowsky-ray-kurzweil-harry-potter-methods-of-rationality/?show=all
  40. http://www.reddit.com/r/LessWrong/comments/17y819/lw_uncensored_thread/c8ax3wt
  41. http://plato.stanford.edu/entries/pascal-wager/
  42. http://wiki.lesswrong.com/wiki/Pascal%27s_mugging
  43. http://unenumerated.blogspot.co.uk/2012/07/pascals-scams.html
  44. http://unenumerated.blogspot.co.uk/2011/01/singularity.html#c1642399731695430742
  45. How to defeat Roko’s basilisk and stop worrying (Alexander Kruel, User:Xixidu)
  46. Conjunction fallacy; Burdensome details.
  47. 0 And 1 Are Not Probabilities
  48. "Privileging the Hypothesis", Eliezer Yudkowsky, LessWrong, 29 September 2009.
  49. But There's Still A Chance, Right? (Eliezer Yudkowsky, LessWrong, 6 January 2008)
  50. What is Bayesianism? (Kaj Sotala, LessWrong, 26 February 2010) — "Core tenet 3: We can use the concept of probability to measure our subjective belief in something. Furthermore, we can apply the mathematical laws regarding probability to choosing between different beliefs. If we want our beliefs to be correct, we must do so."
  51. Cognitive science of rationality (Luke Muehlhauser, LessWrong, 12 September 2011) — "We can measure epistemic rationality by comparing the rules of logic and probability theory to the way that a person actually updates their beliefs."
  52. In the original thread, when someone thought even 10^-9 was too large a probability of Roko's AI, he argued: "Why so small? Also, even if it is that small, the astronomically large gain factor for each % decrease in existential risk can beat 10^-9. 10^50 lives are at stake."
  53. http://kruel.co/lw/r05.png
  54. http://www.prb.org/Articles/2002/HowManyPeopleHaveEverLivedonEarth.aspx
  55. The Least Convenient Possible World (Yvain, LessWrong, 14 March 2009)
  56. http://www.antipope.org/charlie/blog-static/2013/02/rokos-basilisk-wants-you.html
  57. http://lesswrong.com/lw/18b/reason_as_memetic_immune_disorder/