Talk:Roko's basilisk/Archive3

This is an archive page, last updated 27 March 2015. Please do not make edits to this page.
Archives for this talk page: <1>, <2>

Infinity...

For your personal amusement:

1.) Big Brain Theory: Have Cosmologists Lost Theirs?

"If the Universe is Spatially Infinite… …there are an infinite number of identical copies of you on an infinite number of identical copies of Earth. You all always make identical decisions. …there are an infinite number of identical copies of Earth, except that each of them is also occupied by Thor. …as above, but it’s the Thor from Marvel Comics. …there are an infinite number of Earths with alternate histories because they have dragons on them. …on an infinite number of those Earths, the dragons are all nazis. …billions of times every second, an infinite number of identical copies of you spring into existence in the depths of space and immediately die freezing and suffocating. …there are an infinite number of people who are just like you except they’re serial killers. …identical copies of everyone you love are being tortured to death right now. …by identical copies of you. …there’s still no god. …there’s no hope of ever fixing the universe’s horrors, because if it were possible it would have been done already. …an infinite number of identical copies of me are hoping that the universe isn’t infinite.

2.) Is the existence of God logically impossible?

...if an infinite selection of all logically possible universes exists, then many of them will contain gods, if gods are logically possible. Probability combined with the law of large numbers combined with the realities of cosmological scales of space and time entails some very weird things. Which are nevertheless certainly true. I’m not speaking of Nick Bostrom’s bizarre argument that we must be living in a simulated universe (Are you Living in a Simulation?), which doesn’t really work, because it requires accepting the extremely implausible premise that most civilizations will behave in the most horrifically immoral way imaginable, and for no practical reason whatever (in all good sense, by far almost all sims that anyone will ever generate will be games and paradises, not countless trillions of aimlessly tedious worlds with thousands of years of pointless wars, holocausts, plagues, and famines). Rather, I’m speaking of Boltzmann Brains. If the universe were to slowly expand forever, even if it were to fade into a heat death of total equilibrium, even then, simply due to the laws of probability, the random bouncing around of matter and energy would inevitably assemble a working brain. Just by chance. It’s only a matter of time. Maybe once every trillion trillion years in any expanse of a trillion trillion light years. But inevitably. And in fact, it would happen again and again, forever. So when all is said and done, there will be infinitely many more Boltzmann brains created in this universe than evolved brains like ours. The downside, of course, is that by far nearly all these brains will immediately die in the icy vacuum of space (don’t worry, by far most of these won’t survive long enough to experience even one moment of consciousness). And they would almost never have any company. (...) But the worlds lucky enough to get them will experience some pretty cool, or some pretty horrific, fates. In some, this god will be randomly evil and create civilizations just to torment them for fun (and let me reiterate: this may already have happened; in fact it may already be happening right now, in universes or regions of spacetime vastly beyond ours). In others, this god will be randomly awesome and create a paradise for his gentle children. (...) This will happen. It probably already has happened. It probably is happening as I type this. It’s a logically necessary truth.
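The expected-count arithmetic behind the quote's "inevitably" is one multiplication. A toy Python sketch (the rate and region size are the quote's illustrative figures, not measured values):

    # Toy sketch of the quoted reasoning: a fixed chance per region per
    # unit time gives an expected count that grows without bound.
    rate_per_region = 1 / 1e24   # one brain per region per trillion trillion years
    regions = 1e24               # expanses of a trillion trillion light years each
    for years in (1e24, 1e50, 1e100):
        expected_brains = rate_per_region * regions * years
        print(f"{years:.0e} years -> {expected_brains:.0e} expected brains")
    # The expectation grows linearly with time, so over unbounded time the
    # count is unbounded, which is all "it's only a matter of time" needs.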


3.) Divided by Infinity

In the year after Lorraine's death I contemplated suicide six times. Contemplated it seriously, I mean: six times sat with the fat bottle of Clonazepam within reaching distance, six times failed to reach for it, betrayed by some instinct for life or disgusted by my own weakness. I can't say I wish I had succeeded, because in all likelihood I did succeed, on each and every occasion. Six deaths. No, not just six. An infinite number. Times six. There are greater and lesser infinities. But I didn't know that then.

4.) INFINITE ETHICS

If EDR were accepted, speculations about infinite scenarios, however unlikely and far‐fetched, would come to dominate our ethical deliberations. We might become extremely concerned with bizarre possibilities in which, for example, some kind of deity exists that will use its infinite powers to good or bad ends depending on what we do. No matter how fantastical any such scenario would be, if it is a logically coherent and imaginable possibility it should presumably be assigned a finite positive probability, and according to EDR, the smallest possibility of infinite value would smother all other considerations of mere finite values. (...) Suppose that I know that a certain course of action, though much less desirable in every other respect than an available alternative, offers a one‐in‐a‐million chance of avoiding catastrophe involving x people, where x is finite. Whatever else is at stake, this possibility will overwhelm my calculations so long as x is large enough. Even in the finite case, therefore, we might fear that speculations about low‐probability‐high‐stakes scenarios will come to dominate our moral decision making if we follow aggregative consequentialism.
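Bostrom's finite example is easy to run as plain arithmetic. A minimal Python sketch (all numbers are illustrative, not from the paper): once the stakes x are large enough, a one-in-a-million chance of averting catastrophe swamps every ordinary term in an aggregative expected-value sum.

    # Minimal sketch of the quoted worry about aggregative consequentialism.
    def expected_value(outcomes):
        """outcomes: iterable of (probability, value) pairs."""
        return sum(p * v for p, v in outcomes)

    mundane = expected_value([(0.9, 100), (0.1, -50)])        # ordinary stakes
    x = 10**12                                                # people at risk
    speculative = expected_value([(1e-6, x), (1 - 1e-6, 0)])  # one-in-a-million
    print(mundane)      # 85.0
    print(speculative)  # 1000000.0: the far-fetched scenario dominates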

5.) Timeline of the far future

10^10^50: Estimated time for a Boltzmann brain to appear in the vacuum via a spontaneous entropy decrease.
10^10^10^76.66: Scale of an estimated Poincaré recurrence time for the quantum state of a hypothetical box containing an isolated black hole of stellar mass. This time assumes a statistical model subject to Poincaré recurrence. A much simplified way of thinking about this time is that in a model in which history repeats itself arbitrarily many times due to properties of statistical mechanics, this is the time scale when it will first be somewhat similar (for a reasonable choice of "similar") to its current state again.
10^10^10^10^10^1.1: Scale of an estimated Poincaré recurrence time for the quantum state of a hypothetical box containing a black hole with the estimated mass of the entire Universe, observable or not, assuming Linde's chaotic inflationary model with an inflaton whose mass is 10^−6 Planck masses.
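Numbers of this shape cannot even be stored as floating-point values; the practical way to compare them is through iterated logarithms. A minimal Python sketch of that comparison:

    # Sketch: compare the towers above via logarithms, since the values
    # themselves overflow any numeric type. log10(10^10^50) = 10^50, and
    # log10(log10(10^10^10^76.66)) = 10^76.66.
    boltzmann_log10 = 10.0 ** 50         # log10 of the Boltzmann brain time
    recurrence_loglog10 = 10.0 ** 76.66  # log10 of log10 of the recurrence time
    # Even the logarithm of the logarithm of the recurrence time exceeds
    # the logarithm of the Boltzmann brain time by a factor of ~10^26.66:
    print(recurrence_loglog10 / boltzmann_log10)   # ~4.6e+26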

6.) The Lifespan Dilemma

So right now you’ve got an 80% probability of living 10^^10 years. But if you give me a penny, I’ll tetrate that sucker! That’s right – your lifespan will go to 10^^(10^^10) years! That’s an exponential tower (10^^10) tens high! You could write that as 10^^^3, by the way, if you’re interested. Oh, and I’m afraid I’ll have to multiply your survival probability by 99.99999999%.

What? What do you mean, no? The benefit here is vastly larger than the mere 10^^(2,302,360,800) years you bought previously, and you merely have to send your probability to 79.999999992% instead of 10^-1000 to purchase it! Well, that and the penny, of course. If you turn down this offer, what does it say about that whole road you went down before? Think of how silly you’d look in retrospect! Come now, pettiness aside, this is the real world, wouldn’t you rather have a 79.999999992% probability of living 10^^(10^^10) years than an 80% probability of living 10^^10 years? Those arrows suppress a lot of detail, as the saying goes! If you can’t have Significantly More Fun with tetration, how can you possibly hope to have fun at all? Hm?

Why yes, that’s right, I am going to offer to tetrate the lifespan and fraction the probability yet again… I was thinking of taking you down to a survival probability of 1/(10^^^20), or something like that… oh, don’t make that face at me, if you want to refuse the whole garden path you’ve got to refuse some particular step along the way. Wait! Come back! I have even faster-growing functions to show you! And I’ll take even smaller slices off the probability each time! Come back!
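The arithmetic of the garden path checks out step by step, which is exactly what makes it treacherous. A rough Python sketch (the lifespans themselves are not representable, so only the nesting depth of the tetration is tracked):

    # Rough sketch of the quoted garden path. The probability arithmetic
    # is exact; the lifespan (10^^10, then 10^^(10^^10), ...) is tracked
    # only as a nesting depth, since the values cannot be computed.
    prob = 0.80    # start: 80% chance of living 10^^10 years
    depth = 0      # how many times the lifespan has been re-tetrated
    for step in range(1, 11):
        prob *= 0.9999999999   # multiply survival probability by 99.99999999%
        depth += 1
        print(f"step {step}: survival {prob:.10%}, tetration depth {depth}")
    # One step leaves 0.80 * 0.9999999999 = 79.9999999920%, matching the
    # quote's 79.999999992%; ten steps still leave about 79.99999992%.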

7.) The Finale of the Ultimate Meta Mega Crossover — Unsigned, by: XiXiDu / talk / contribs 2013-02-28T12:34:45

Summary: people do not understand that it is possible for something never to occur even given an unbounded number of trials. You can flip heads or tails on a coin. You will never flip bacon or tree. You can do 3^^^^^10 flips ("3 hat hat hat hat hat ten") and it will still never happen, because the tiny possibility of reality spontaneously altering does not actually grow with multiple trials. Also, you can't flip more coins than there are particles in the universe anyway. King Skeleton (talk) 07:52, 29 December 2014 (UTC)
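The arithmetic behind this summary: the chance of seeing an outcome at least once in n independent trials is 1 - (1 - p)^n, which is exactly zero for an impossible outcome (p = 0) no matter how large n is. A minimal Python sketch:

    # Sketch of the coin-flip point: repetition only helps outcomes that
    # have nonzero probability in the first place.
    def p_at_least_once(p, n):
        """Probability of at least one success in n independent trials."""
        return 1.0 - (1.0 - p) ** n

    print(p_at_least_once(0.5, 10))       # ~0.999: ordinary heads turns up fast
    print(p_at_least_once(0.0, 10**18))   # 0.0: "flipping bacon" never happens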

Just deserts

Anyone stupid enough to think that a future simulation of them is them deserves any nightmares this produces in them. — Unsigned, by: 70.185.176.250 / talk / contribs

That's not very nice. Nullahnung (talk) 09:57, 5 September 2014 (UTC)
I somewhat agree, but in the hope that such nightmares drive said thinker to consider therapy and rejoin the real world. It's easy to get... lost, when dealing with philosophical subjects. Or at least consider this less weighty matter: "I think I think, therefore, I think I am, I think. Which is close enough for government work." That typically derails the logic train for a bit. Wzrd1 (talk) 22:58, 17 October 2014 (UTC)
That's why the last section is basically "you should get out more" - David Gerard (talk) 13:52, 29 October 2014 (UTC)
Actually, there are a few options applicable here:
  1. Essentialism is true (i.e. there's some "essence" connecting all your experiences together, the continuous existence of your mind is not a mere illusion) and an exact simulation of your mental processes can summon your essence to that time and location (maybe through quantum entanglement?). Basically, true resurrection is possible.
  2. Essentialism is true, but your essence cannot be resummoned after it becomes detached (i.e. after you die). Recreating your memories and mental processes will only create a copy, nothing more. True resurrection is not possible.
  3. Essentialism is false, what you consider "you" is but a collection of separate conscious experiences which your brain spices up with a false feeling of unity. If this is the case, there is no objective difference between you and a sufficiently precise copy of you.
In both #1 and #3, it can be argued that the future simulation would be you just as much as you are yourself right now. Annoyingly enough, these 3 options are probably physically indistinguishable, meaning we might never know with certainty which one of them is true. 141.134.75.236 (talk) 07:18, 29 October 2014 (UTC)
I would be quite impressed if there was some form of quantum entanglement that could persist over such lengths of time and such massive distances without breaking. I would be wary of adding "maybe through quantum entanglement?" to statements like that, as it seems more than a bit far-fetched. - Grant (talk) 07:27, 29 October 2014 (UTC)
Note that the theory of identity espoused here doesn't use that at all - it literally says that sufficiently-good copies of you are also you - David Gerard (talk) 13:52, 29 October 2014 (UTC)
Yep, I was referring to 141's specific use of "maybe through quantum entanglement?" in his/her point above. I'm probably going to hurt my brain if I try to think of the physics implications of such a theory of identity, so I won't bother. - Grant (talk) 15:32, 29 October 2014 (UTC)
Anyone citing quantum entanglement or the uncertainty principle outside the context of actual observation of actual singular particles can safely be ignored. Ikanreed (talk) 15:34, 29 October 2014 (UTC)
Now now, you can cite both of those things in the context of observation of more than one particle at once ;). Jesting aside, that's more or less correct. Entanglement is incredibly interesting, but it has its limits. - Grant (talk) 15:37, 29 October 2014 (UTC)
Hey now, don't dismiss everything a person says just because they add "(maybe through quantum entanglement?)" after explaining a position (#1) that implies a non-local entanglement of separate states. Whether essentialist entanglement of mental states would happen through quantum entanglement or not is beside the point anyway. 141.134.75.236 (talk) 16:27, 29 October 2014 (UTC)
Personally I'm not dismissing what you have to say, but #1 does not imply entanglement of separate states, at least not without a fairly significant jump in logic. What entangled the states in the first place? What physical object do these states represent? How are the states immune to decoherence? While coming up with ideas is neat, quantum entanglement is a very real thing that receives many research hours. Tacking it on to an idea because it sounds like it could possibly relate hurts your case as opposed to helping it. - Grant (talk) 16:34, 29 October 2014 (UTC)
I'm not saying it would have to be quantum entanglement (as I thought I was pretty clear about in the previous post), but #1 clearly supposes that an essence connects all the mental states together. 141.134.75.236 (talk) 16:50, 29 October 2014 (UTC)
You're not saying it has to be, but you mentioned it could perhaps be, which is very far off base. It may seem pedantic that I bring this up, but as Ikanreed mentions, citing physics concepts that don't apply to a given situation makes your points look weaker. As someone who has studied quantum mechanics for many years, your point #1 there is indeed weaker with the mention of quantum entanglement. Whether the underlying point is right or not is a philosophical question I won't get into, but the additional physics speculation doesn't help your case. - Grant (talk) 16:57, 29 October 2014 (UTC)
Sheesh, it's just a little personal speculation. It's not like I'm trying to legitimize #1 by adding it there. Sorry if it's a bit non-expert-y of me, but position #1 describes a "mysterious non-local connection"; how am I supposed to not think "Hmm, sounds kinda like quantum entanglement" then? 141.134.75.236 (talk) 17:38, 29 October 2014 (UTC)
Yes, I realize it's personal speculation, and that's why I'm commenting explicitly on that and not on you or your ideas in general. Popular science has a way of distorting physics (especially quantum mechanics) into something it's not, so I do my best to correct some of those distortions where I see them. In this case, for example, quantum entanglement may indeed be nonlocal, but it's not the only thing in quantum mechanics that is. For example, the Aharonov-Bohm effect is nonlocal and has nothing to do with entanglement. There are other examples, but I think that one makes my point. You do have to be somewhat careful when citing real physical phenomena, as this terminology really does mean something. In a similar vein, people often butcher the Heisenberg uncertainty principle by trying to use it in a way that doesn't follow from its actual meaning. - Grant (talk) 17:53, 29 October 2014 (UTC)
I don't really see how the Aharonov-Bohm effect is equally applicable here as the phenomenon where 2 things of the same nature react as one through a non-local connection. The non-locality wasn't the only thing where the comparison seemed to hold up. Really, if #1 is true, then the question is whether the "connectedness" happens at the quantum level, at a metaphysical level, or still at some other level. 141.134.75.236 (talk) 18:36, 29 October 2014 (UTC)
The Aharonov-Bohm effect was another example of a nonlocal process, of which there exist more than one. Where else does the comparison seem to hold up? Just the correlations? Correlated and nonlocal isn't enough to say "entanglement." There are additional requirements in place. You're correct that if #1 is correct then the "connectedness" comes from somewhere, but at this point, the question is so broad that saying it could be quantum entanglement seems fishy. Among other things, we still have the situation that creating the initial entanglement requires some local action on the entangled states. - Grant (talk) 18:41, 29 October 2014 (UTC)
Ah, that's a good point. But hypothetically speaking, if it were possible to create entangled states without said local action, would that mean that you wouldn't call it quantum entanglement in that instance? I agree that quantum entanglement in its narrow, practical definition probably wouldn't be applicable in #1, but since I've been talking in purely hypothetical terms from the start, I think it's not unreasonable to use the term "quantum entanglement" in a less narrow sense. 141.134.75.236 (talk) 19:03, 29 October 2014 (UTC)
I'm actually not sure what the result would be if there weren't said local action. At the moment, there's no reason to believe that such a thing is even possible. The trick there is that quantum entanglement isn't confined by the speed of light. It can get away with this because the universal speed limit only applies to information (of which matter, light, etc. can be considered subsets in a way). On the other hand, an operator that acted nonlocally would also involve transmitting information nonlocally, which isn't possible given said universal speed limit. Mostly, the problem with using "quantum entanglement" in the way you're using it is that "quantum entanglement" is a term of art that means a very specific thing. Unless you're planning to invent a second definition, there is no concept of quantum entanglement in a "less narrow sense." It would be like saying that you can apply Maxwell's equations in a less narrow sense; that very statement doesn't work because Maxwell's equations are clearly defined. - Grant (talk) 19:08, 29 October 2014 (UTC)
Perhaps it can't be achieved experimentally, but suppose the universe is vast enough for the exact same event, let's say it's a spontaneous parametric down-conversion, to happen in two different locations by chance. Isn't it conceivable that you wouldn't just get 2 pairs of entangled photons, but 4 photons that are all entangled with each other? 141.134.75.236 (talk) 20:05, 29 October 2014 (UTC)
No. Spontaneous parametric down conversion occurs when a single beam of light enters a second-order non-linear crystal. It's not possible for this to happen in two distinct locations in such a way that both pairs are also entangled with each other. Measuring the pairs may turn out the same results in a few measurements by complete chance, but statistically the two events will not be correlated at all. - Grant (talk) 20:21, 29 October 2014 (UTC)
Just a simple no? So there've been experiments proving this as an impossibility then? How was it established that the two events were exactly the same? Wouldn't it be impossible to know all parameters are equal with 100% certainty? Isn't it impossible to make an experiment where all parameters are precisely controlled? I'm pretty sure it's untestable. 141.134.75.236 (talk) 20:40, 29 October 2014 (UTC)

──────────────────────────────────────────────────────────────────────────────────────────────────── And here's the nihilists' gambit at last. "You can't know anything, and can't prove that my crazy ideas are impossible, that means I win!". Ikanreed (talk) 20:45, 29 October 2014 (UTC)

Since when was it about winning? I was just surprised by such an absolute claim of the impossibility of the hypothesis when I was pretty sure that it's currently untestable. Also, where did I specify which position is my personal belief? Because I didn't. 141.134.75.236 (talk) 20:51, 29 October 2014 (UTC)
But it is testable. Entanglement requires that some initial operation correlate the properties of interest in the two objects. It doesn't matter if they're identical (in fact, all particles with the same properties are indistinguishable and thus identical according to quantum mechanics). It would not be enough for the two beams and the two crystals to have identical properties; the fundamental probabilistic nature of quantum mechanics demands that unless some action forcibly correlates the two events, they cannot and will not be entangled. This is a fundamental fact of quantum mechanics, hence why my answer is a definite "no." - Grant (talk) 22:27, 29 October 2014 (UTC)
To put it a different (and perhaps more accessible) way, the randomness inherent in quantum mechanics says that even if both of these events play out exactly the same way, and both sets of crystals/beams are identical, the results of measuring both events will still not be correlated. It comes down to the fact that quantum states effectively represent statistical distributions (e.g. we say the particles have some likelihood to be in some state, but we can't say which state until a measurement is taken). Depending on which direction you start from, this can be taken as an axiom of quantum mechanics, and thus can only be violated if quantum mechanics itself is wrong. The only other way to make this work would be to have an operator that is capable of acting instantly nonlocally, which would violate relativity (specifically the universal speed limit set by the speed of light in vacuum). While there is technically a non-zero chance that the entire theory of relativity and/or the entire theory of quantum mechanics is/are wrong, that chance is infinitesimally close to zero. As such, what you're proposing is infinitesimally close to impossible. - Grant (talk) 22:38, 29 October 2014 (UTC)
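The statistical claim here is easy to see in a toy simulation; the following Python sketch is a cartoon of the statistics, not of the physics. Partners inside an entangled pair always agree, while members of two identically prepared but independent pairs agree only at chance level:

    # Toy Monte Carlo: within-pair outcomes are perfectly correlated (as
    # in a Bell state measured in the same basis), but two independently
    # prepared pairs show no cross-correlation, however identical their
    # preparation was.
    import random

    def measure_entangled_pair():
        outcome = random.choice([0, 1])
        return outcome, outcome   # the two partners always agree

    N = 100_000
    within = cross = 0
    for _ in range(N):
        a1, b1 = measure_entangled_pair()   # pair 1
        a2, b2 = measure_entangled_pair()   # pair 2, identically prepared
        within += (a1 == b1)                # partners within one pair
        cross += (a1 == a2)                 # members of different pairs

    print(within / N)   # 1.0: perfect correlation inside each pair
    print(cross / N)    # ~0.5: chance level across independent pairs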
Well, I still have a lot of questions, but I'm kinda getting tired of the discussion. I'll take your word for it that according to our current understanding of quantum mechanics, the hypothetical situation I described can't happen. I'm kinda doubtful that the entire theory of quantum mechanics or relativity would necessarily need to be wrong for it to happen, though. 141.134.75.236 (talk) 23:38, 29 October 2014 (UTC)
The parts of quantum mechanics and relativity that would need to be broken to make this work are axiomatic. In other words, they are the basis on which the theories are built. If you destroy the central axioms of a theory, you destroy the theory itself. Whatever would be left afterwards would bear very little resemblance to the theories of quantum mechanics and relativity as they currently exist. The replacement theories would have to be almost entirely new. - Grant (talk) 23:40, 29 October 2014 (UTC)


Nah, I think dismissing it wholesale is pretty legit. Arbitrary conjecture that attempts to overextend the fundamentals of physics to more human-scale mechanics is a way to wrap bullshit claims in the cloth of science, not a genuine way to add rigor. 99.999% of the time, if an idea makes sense, it makes sense without entanglement too. Entanglement isn't fancy, you can see it with a home-based dual-slit experiment, as self-interference is an expression of that concept. It isn't magic, and any attempt to entangle all the states of all the particles of your brain with anything would result in immediate, energetic, messy death. It isn't worth the time to dismiss this case when the general case of "entanglement->crazytalk" is pretty straightforward. Ikanreed (talk) 16:44, 29 October 2014 (UTC)
"... you can see it with a home-based dual-slit experiment, as self-interference is an expression of that concept" - That's actually not correct. Interference (including self-interference) is a property of superposition. Entanglement, on the other hand, is far stronger than a superposition. Producing entangled particles requires that one act on two (or more) particles in such a way that some observable property of all particles becomes correlated in some fashion. In fact, the simplest mathematical examples of entanglements are Bell states, while the most "natural" occurring entanglement could likely be considered subatomic particle decay. - Grant (talk) 18:09, 29 October 2014 (UTC)
You're free to dismiss essentialism as crazytalk, but that doesn't mean you should just ignore anyone who tries to explain the position and in the process brings up a real phenomenon that shows non-local connectedness. I also brought up position #3, which rejects essentialism. Do you support that position then, or do you think that one's bogus too (since I'm the person who stated it)? 141.134.75.236 (talk) 17:03, 29 October 2014 (UTC)
The best way to defend yourself against this is to dismiss everyone who believes in essentialism as stupid. This would mean in the future the AI would be forced to conclude the only people who would believe in its threat would be people too stupid to create it. King Skeleton (talk) 06:48, 30 October 2014 (UTC)
Actually, resurrectionist essentialists (#1) are probably hardly a relevant demographic with regard to the Basilisk, since they're likely to already have a non-technological religion, meaning they'd consider the idea of a machine being able to perform resurrections as absurd and blasphemous. The ones to look out for are the strong proponents of non-essentialism (#3). They don't believe in God and are the most likely of the three groups to have seriously contemplated varying theories of identity.
Though, for the sake of preventing more people from being called stupid for believing in a certain theory of identity: there's no reason why the Basilisk couldn't blackmail you by threatening people who aren't (supposedly) you. This might in fact be a much more effective approach. Hardly anyone would be severely troubled by the thought of a clone with your memories suffering in a faraway future, but what about your loved ones? Who would want to condemn the person they love more than anything to endless suffering in a distant future (even if it's actually just a duplicate with their memories)? 141.134.75.236 (talk) 07:39, 30 October 2014 (UTC)
Or it could torture cute animals to blackmail little girls and vegetarians. Or destroy cars to blackmail car enthusiasts etc. Really, the options are endless. 141.134.75.236 (talk) 23:20, 25 November 2014 (UTC)
All this rests on the idea that it is rational to believe mecha-satan will send people to digi-hell unless you give money to something you have no reason to believe will result in the creation of mecha-satan and may, for all you know, be a complete dead end that delays mecha-satan's creation. The key problem with this whole "timeless decision" nonsense is that the Basilisk tries to apply it in the wrong direction; unless we humans can, right now, perfectly predict what steps are needed to create mecha-satan, it cannot blame us for not taking a specific sequence of steps. And it shares the problem of Pascal's Wager in that we have no reason to believe a specific mecha-satan is more likely than any other; what if funding AI research is not what mecha-satan wants and it punishes anyone who does fund it, for example? What if mecha-satan is ultimately defeated by cyber-Jesus and those who served it are subject to eternal torment, while those it tormented are granted everlasting joy? King Skeleton (talk) 07:20, 29 December 2014 (UTC)

On the xkcd forums

So, today's xkcd is about the AI box experiment and references the basilisk in the alt text by name. There's a thread on the xkcd forums, and Yudkowsky shows up. Just a snippet:

Tl;dr a band of internet trolls that runs or took over RationalWiki made up around 90% of the Roko's Basilisk thing; the RationalWiki lies were repeated by bad Slate reporters who were interested in smearing particular political targets; and you should've been more skeptical when a group of non-mathy Internet trolls claimed that someone else known to be into math believed something that seemed so blatantly wrong to you, and invited you to join in on having a good sneer at them. (Randall Monroe, I am casting a slightly disapproving eye in your direction but I understand you might not have had other info sources. I'd post the link or the text of the link, but I can't seem to do so.)

--ZooGuard (talk) 15:06, 21 November 2014 (UTC)

P.S. Someone took out a chunk of the article.--ZooGuard (talk) 15:09, 21 November 2014 (UTC)

Yeah, he made a claim as to malicious lies on Reddit. I answered quoting the Roko post back to him.
The removed bit is from the Reddit discussion - apparently the TDT paper does not, itself, include this theory that copies of you count as your own self. I'll have to dive into the PDF again - David Gerard (talk) 17:44, 21 November 2014 (UTC)
I should also point out, RW's been mentioned by name on the ExplainXKCD wiki as well.
This idea is often misrepresented as being believed by readers of LessWrong.com since the post was originally placed there and then deleted, and an outside wiki, RationalWiki, represented this as proof that LessWrong readers believed in Roko's Basilisk. Yudkowsky, who also owns LessWrong.com, has written that RationalWiki is deliberately misrepresenting this history. For some of the theory that was (arguably mis-)used to argue for Roko's Basilisk by the original believer, see the Newcomblike decision theories developed on LessWrong.com.
The part regarding RW appears to be largely edited in by Yudkowsky himself, if the edit history's to be believed. Noir LeSable (talk) 17:51, 21 November 2014 (UTC)
If he didn't believe it, he wouldn't have banned discussion of it. It's the logical consequence of his other beliefs in silly god-AIs, so of course he does. King Skeleton (talk) 07:57, 29 December 2014 (UTC)

BoN asking about refutations

Not sure where to put this.... but I read about this basilisk deal on xkcd and came here to find out what it was. I agree with most of the refutations, but I am not sure about the section "Seed AI and indirect influence" -- assuming the basilisk will be real (and that all other refutations are false), won't it realize that, while it is responsible for most of itself, humans not building it in time really did cause human deaths? Won't it realize that its blackmail saves lives, and is therefore worth doing? In other words, isn't the fact that it largely created itself irrelevant, given that the humans who imagined the basilisk, though they had tiny capacities to help, performed well under capacity, and therefore deserve punishment?

EDIT: Also, to counter-address a different refutation, wouldn't the basilisk not only punish those who imagined it as it is, but anyone who imagined any utopia-creating AI and refused to donate as much as possible? Therefore, the probability of the hell envisioned by the basilisk theory, while infinitesimal, is perhaps underrepresented.

EDIT AGAIN: Furthermore, wouldn't the AI punish those who failed to evangelize theories about it?

EDIT: The previous two edited thoughts don't increase the chances of the basilisk occurring, but only spread its effects to more people if it indeed occurs, just to clarify. I do not believe in the basilisk, and consider the "don't give in to blackmail" refutation to be the strongest, but I think that the refutations could be stronger if these thoughts were addressed. — Unsigned, by: 68.53.108.81 / talk / contribs 16:05, 22 November 2014 (UTC)

Possibly. It's all basically scary philosophical campfire stories, though - David Gerard (talk) 16:50, 22 November 2014 (UTC)