Talk:Roko's basilisk/Archive3
Infinity...
For your personal amusement:
1.) Big Brain Theory: Have Cosmologists Lost Theirs?
"If the Universe is Spatially Infinite…
…there are an infinite number of identical copies of you on an infinite number of identical copies of Earth. You all always make identical decisions.
…there are an infinite number of identical copies of Earth, except that each of them is also occupied by Thor.
…as above, but it’s the Thor from Marvel Comics.
…there are an infinite number of Earths with alternate histories because they have dragons on them.
…on an infinite number of those Earths, the dragons are all nazis.
…billions of times every second, an infinite number of identical copies of you spring into existence in the depths of space and immediately die freezing and suffocating.
…there are an infinite number of people who are just like you except they’re serial killers.
…identical copies of everyone you love are being tortured to death right now.
…by identical copies of you.
…there’s still no god.
…there’s no hope of ever fixing the universe’s horrors, because if it were possible it would have been done already.
…an infinite number of identical copies of me are hoping that the universe isn’t infinite.
2.) Is the existence of God logically impossible?
...if an infinite selection of all logically possible universes exists, then many of them will contain gods, if gods are logically possible.
Probability combined with the law of large numbers combined with the realities of cosmological scales of space and time entails some very weird things. Which are nevertheless certainly true. I’m not speaking of Nick Bostrom’s bizarre argument that we must be living in a simulated universe (Are you Living in a Simulation?), which doesn’t really work, because it requires accepting the extremely implausible premise that most civilizations will behave in the most horrifically immoral way imaginable, and for no practical reason whatever (in all good sense, by far almost all sims that anyone will ever generate will be games and paradises, not countless trillions of aimlessly tedious worlds with thousands of years of pointless wars, holocausts, plagues, and famines). Rather, I’m speaking of Boltzmann Brains.
If the universe were to slowly expand forever, even if it were to fade into a heat death of total equilibrium, even then, simply due to the laws of probability, the random bouncing around of matter and energy would inevitably assemble a working brain. Just by chance. It’s only a matter of time. Maybe once every trillion trillion years in any expanse of a trillion trillion light years. But inevitably. And in fact, it would happen again and again, forever. So when all is said and done, there will be infinitely many more Boltzmann brains created in this universe than evolved brains like ours. The downside, of course, is that by far nearly all these brains will immediately die in the icy vacuum of space (don’t worry, by far most of these won’t survive long enough to experience even one moment of consciousness). And they would almost never have any company.
(...)
But the worlds lucky enough to get them will experience some pretty cool, or some pretty horrific, fates. In some, this god will be randomly evil and create civilizations just to torment them for fun (and let me reiterate: this may already have happened; in fact it may already be happening right now, in universes or regions of spacetime vastly beyond ours). In others, this god will be randomly awesome and create a paradise for his gentle children.
(...)
This will happen. It probably already has happened. It probably is happening as I type this. It’s a logically necessary truth.
3.)
In the year after Lorraine's death I contemplated suicide six times. Contemplated it seriously, I mean: six times sat with the fat bottle of Clonazepam within reaching distance, six times failed to reach for it, betrayed by some instinct for life or disgusted by my own weakness.
I can't say I wish I had succeeded, because in all likelihood I did succeed, on each and every occasion. Six deaths. No, not just six. An infinite number.
Times six.
There are greater and lesser infinities.
But I didn't know that then.
4.) INFINITE ETHICS
If EDR were accepted, speculations about infinite scenarios, however unlikely and far‐fetched, would come to dominate our ethical deliberations. We might become extremely concerned with bizarre possibilities in which, for example, some kind of deity exists that will use its infinite powers to good or bad ends depending on what we do. No matter how fantastical any such scenario would be, if it is a logically coherent and imaginable possibility it should presumably be assigned a finite positive probability, and according to EDR, the smallest possibility of infinite value would smother all other considerations of mere finite values.
(...)
Suppose that I know that a certain course of action, though much less desirable in every other respect than an available alternative, offers a one‐in‐a‐million chance of avoiding catastrophe involving x people, where x is finite. Whatever else is at stake, this possibility will overwhelm my calculations so long as x is large enough. Even in the finite case, therefore, we might fear that speculations about low‐probability‐high‐stakes scenarios will come to dominate our moral decision making if we follow aggregative consequentialism.
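(Editorial note, not part of Bostrom's excerpt: the worry above is just expected-value arithmetic, and a toy calculation with entirely made-up numbers shows how the long shot takes over once the stakes are large enough.)
```python
# Toy version of Bostrom's worry: once the stakes x are large enough, even a
# one-in-a-million scenario dominates a sure, modest benefit in expected value.
# All numbers here are invented purely for illustration.
p_far_fetched = 1e-6        # probability of the speculative catastrophe scenario
sure_benefit = 1_000        # value of the mundane alternative, delivered with certainty

for x in (10**6, 10**8, 10**12):          # people affected in the speculative scenario
    ev_speculative = p_far_fetched * x    # expected value of acting on the speculation
    print(f"x = {x:>13}: EV = {ev_speculative:>12.1f}, dominates: {ev_speculative > sure_benefit}")

# The speculation wins as soon as x exceeds sure_benefit / p_far_fetched (10**9 here),
# which is the sense in which low-probability-high-stakes scenarios "smother"
# merely finite considerations under naive aggregation.
```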
5.) Timeline of the far future
10^10^50: Estimated time for a Boltzmann brain to appear in the vacuum via a spontaneous entropy decrease.
10^10^10^76.66: Scale of an estimated Poincaré recurrence time for the quantum state of a hypothetical box containing an isolated black hole of stellar mass. This time assumes a statistical model subject to Poincaré recurrence. A much simplified way of thinking about this time is that in a model in which history repeats itself arbitrarily many times due to properties of statistical mechanics, this is the time scale when it will first be somewhat similar (for a reasonable choice of "similar") to its current state again.
10^10^10^10^10^1.1: Scale of an estimated Poincaré recurrence time for the quantum state of a hypothetical box containing a black hole with the estimated mass of the entire Universe, observable or not, assuming Linde's chaotic inflationary model with an inflaton whose mass is 10^−6 Planck masses.
6.)
So right now you’ve got an 80% probability of living 10^^10 years. But if you give me a penny, I’ll tetrate that sucker! That’s right – your lifespan will go to 10^^(10^^10) years! That’s an exponential tower (10^^10) tens high! You could write that as 10^^^3, by the way, if you’re interested. Oh, and I’m afraid I’ll have to multiply your survival probability by 99.99999999%.
What? What do you mean, no? The benefit here is vastly larger than the mere 10^^(2,302,360,800) years you bought previously, and you merely have to send your probability to 79.999999992% instead of 10^-1000 to purchase it! Well, that and the penny, of course. If you turn down this offer, what does it say about that whole road you went down before? Think of how silly you’d look in retrospect! Come now, pettiness aside, this is the real world, wouldn’t you rather have a 79.999999992% probability of living 10^^(10^^10) years than an 80% probability of living 10^^10 years? Those arrows suppress a lot of detail, as the saying goes! If you can’t have Significantly More Fun with tetration, how can you possibly hope to have fun at all?
Hm? Why yes, that’s right, I am going to offer to tetrate the lifespan and fraction the probability yet again… I was thinking of taking you down to a survival probability of 1/(10^^^20), or something like that… oh, don’t make that face at me, if you want to refuse the whole garden path you’ve got to refuse some particular step along the way.
Wait! Come back! I have even faster-growing functions to show you! And I’ll take even smaller slices off the probability each time! Come back!
7.) The Finale of the Ultimate Meta Mega Crossover — Unsigned, by: XiXiDu / talk / contribs 2013-02-28T12:34:45
- Summary: people do not understand that it is possible for something to not occur even in an infinite probability space. You can flip heads or tails on a coin. You will never flip bacon or tree. You can do 3 hat hat hat hat hat ten flips and it will still never happen because the tiny possibility of reality spontaneously altering does not actually grow with multiple trials. Also you can't flip more coins than there are particles in the universe anyway. King Skeleton (talk) 07:52, 29 December 2014 (UTC)
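(Editorial note: the lifespan-dilemma excerpt and the reply above lean on Knuth's up-arrow notation, where 10^^10 means a power tower of ten 10s. Below is a minimal Python sketch of that notation, using tiny test values because anything like 10^^10 is far too large to compute.)
```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ^...^ b with n arrows: n=1 is exponentiation,
    n=2 is tetration (a power tower of b copies of a), n=3 is pentation, ..."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(2, 2, 4))  # 2^^4 = 2**(2**(2**2)) = 65536
print(up_arrow(2, 3, 3))  # 2^^^3 = 2^^(2^^2) = 2^^4 = 65536
# 10^^10 -- a tower of ten 10s -- already dwarfs anything physically meaningful;
# each step of the "garden path" in the excerpt tetrates the previous number again.
```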
Just deserts
Anyone stupid enough to think that a future simulation of them is them deserves any nightmares this produces in them. — Unsigned, by: 70.185.176.250 / talk / contribs
- That's not very nice. Nullahnung (talk) 09:57, 5 September 2014 (UTC)
- I somewhat agree, but in the hope that such nightmares drive said thinker to consider therapy and rejoin the real world. It's easy to get... lost, when dealing with philosophical subjects. Or at least consider this less weighty matter, "I think I think, therefore, I think I am, I think. Which is close enough for government work". That typically derails the logic train for a bit. Wzrd1 (talk) 22:58, 17 October 2014 (UTC)
- That's why the last section is basically "you should get out more" - David Gerard (talk) 13:52, 29 October 2014 (UTC)
- Actually, there are a few options applicable here:
- Essentialism is true (i.e. there's some "essence" connecting all your experiences together, the continuous existence of your mind is not a mere illusion) and an exact simulation of your mental processes can summon your essence to that time and location (maybe through quantum entanglement?). Basically, true resurrection is possible.
- Essentialism is true, but your essence cannot be resummoned after it becomes detached (i.e. after you die). Recreating your memories and mental processes will only create a copy, nothing more. True resurrection is not possible.
- Essentialism is false, what you consider "you" is but a collection of separate conscious experiences which your brain spices up with a false feeling of unity. If this is the case, there is no objective difference between you and a sufficiently precise copy of you.
- In both #1 and #3, it can be argued that the future simulation would be you just as much as you are yourself right now. Annoyingly enough, these 3 options are probably physically indistinguishable, meaning we might never know with certainty which one of them is true. 141.134.75.236 (talk) 07:18, 29 October 2014 (UTC)
- I would be quite impressed if there was some form of quantum entanglement that could persist over such lengths of time and such massive distances without breaking. I would be wary adding "maybe through quantum entanglement?" to statements like that, as it seems more than a bit far-fetched. - Grant (talk) 07:27, 29 October 2014 (UTC)
- Note that the theory of identity espoused here doesn't use that at all - it literally says that sufficiently-good copies of you are also you - David Gerard (talk) 13:52, 29 October 2014 (UTC)
- Yep, I was referring to 141's specific use of "maybe through quantum entanglement?" in his/her point above. I'm probably going to hurt my brain if I try to think of the physics implications of such a theory of identity, so I won't bother. - Grant (talk) 15:32, 29 October 2014 (UTC)
- Hey now, don't dismiss everything a person says just because they add "(maybe through quantum entanglement?)" after explaining a position (#1) that implies a non-local entanglement of separate states. Whether essentialist entanglement of mental states would happen through quantum entanglement or not is besides the point anyway. 141.134.75.236 (talk) 16:27, 29 October 2014 (UTC)
- Personally I'm not dismissing what you have to say, but #1 does not imply entanglement of separate states, at least not without a fairly significant jump in logic. What entangled the states in the first place? What physical object do these states represent? How are the states immune to decoherence? While coming up with ideas is neat, quantum entanglement is a very real thing that receives many research hours. Tacking it on to an idea because it sounds like it could possibly relate hurts your case as opposed to helping it. - Grant (talk) 16:34, 29 October 2014 (UTC)
- I'm not saying it would have to be quantum entanglement (as I thought I was pretty clear about in the previous post), but #1 clearly supposes that an essence connects all the mental states together. 141.134.75.236 (talk) 16:50, 29 October 2014 (UTC)
- You're not saying it has to be, but you mentioned it could perhaps be, which is very far off base. It may seem pedantic that I bring this up, but as Ikanreed mentions, citing physics concepts that don't apply to a given situation makes your points look weaker. As someone who has studied quantum mechanics for many years, your point #1 there is indeed weaker with the mention of quantum entanglement. Whether the underlying point is right or not is a philosophical question I won't get into, but the additional physics speculation doesn't help your case. - Grant (talk) 16:57, 29 October 2014 (UTC)
- Sheesh, it's just a little personal speculation. It's not like I'm trying to legitimize #1 by adding it there. Sorry if it's a bit non-expert-y of me, but position #1 describes a "mysterious non-local connection"; how am I supposed to not think "Hmm, sounds kinda like quantum entanglement" then? 141.134.75.236 (talk) 17:38, 29 October 2014 (UTC)
- Yes, I realize it's personal speculation, and that's why I'm commenting explicitly on that and not on you or your ideas in general. Popular science has a way of distorting physics (especially quantum mechanics) into something it's not, so I do my best to correct some of those distortions where I see them. In this case, for example, quantum entanglement may indeed be nonlocal, but it's not the only thing in quantum mechanics that is. For example, the Aharonov-Bohm effect is nonlocal and has nothing to do with entanglement. There are other examples, but I think that one makes my point. You do have to be somewhat careful when citing real physical phenomena, as this terminology really does mean something. In a similar vein, people often butcher the Heisenberg uncertainty principle by trying to use it in a way that doesn't follow from its actual meaning. - Grant (talk) 17:53, 29 October 2014 (UTC)
- I don't really see how the Aharonov-Bohm effect is equally applicable here as the phenomenon where 2 things of the same nature react as one through a non-local connection. The non-locality wasn't the only thing where the comparison seemed to hold up. Really, if #1 is true, then the question is whether the "connectedness" happens at the quantum level, at a metaphysical level, or still at some other level. 141.134.75.236 (talk) 18:36, 29 October 2014 (UTC)
- The Aharonov-Bohm effect was another example of a nonlocal process, of which there exist more than one. Where else does the comparison seem to hold up? Just the correlations? Correlated and nonlocal isn't enough to say "entanglement." There are additional requirements in place. You're correct that if #1 is correct then the "connectedness" comes from somewhere, but at this point, the question is so broad that saying it could be quantum entanglement seems fishy. Among other things, we still have the situation that creating the initial entanglement requires some local action on the entangled states. - Grant (talk) 18:41, 29 October 2014 (UTC)
- Ah, that's a good point. But hypothetically speaking, if it were possible to create entangled states without said local action, would that mean that you wouldn't call it quantum entanglement in that instance? I agree that quantum entanglement in its narrow, practical definition probably wouldn't be applicable in #1, but since I've been talking in purely hypothetical terms from the start, I think it's not unreasonable to use the term "quantum entanglement" in a less narrow sense. 141.134.75.236 (talk) 19:03, 29 October 2014 (UTC)
- I'm actually not sure what the result would be if there weren't said local action. At the moment, there's no reason to believe that such a thing is even possible. The trick there is that quantum entanglement isn't confined by the speed of light. It can get away with this because the universal speed limit only applies to information (of which matter, light, etc. can be considered subsets in a way). On the other hand, an operator that acted nonlocally would also involve transmitting information nonlocally, which isn't possible given said universal speed limit. Mostly, the problem with using "quantum entanglement" in the way you're using it is that "quantum entanglement" is a term of art that means a very specific thing. Unless you're planning to invent a second definition, there is no concept of quantum entanglement in a "less narrow sense." It would be like saying that you can apply Maxwell's laws in a less narrow sense; that very statement doesn't work because Maxwell's laws are clearly defined. - Grant (talk) 19:08, 29 October 2014 (UTC)
- Perhaps it can't be achieved experimentally, but suppose the universe is vast enough for the exact same event, let's say it's a spontaneous parametric down-conversion, to happen in two different locations by chance. Isn't it conceivable that you wouldn't just get 2 pairs of entangled photons, but 4 photons that are all entangled with each other? 141.134.75.236 (talk) 20:05, 29 October 2014 (UTC)
- No. Spontaneous parametric down conversion occurs when a single beam of light enters a second-order non-linear crystal. It's not possible for this to happen in two distinct locations in such a way that both pairs are also entangled with each other. Measuring the pairs may turn out the same results in a few measurements by complete chance, but statistically the two events will not be correlated at all. - Grant (talk) 20:21, 29 October 2014 (UTC)
- Just a simple no? So there've been experiments proving this as an impossibility then? How was it established that the two events were exactly the same? Wouldn't it be impossible to know all parameters are equal with 100% certainty? Isn't it impossible to make an experiment where all parameters are precisely controlled? I'm pretty sure it's untestable. 141.134.75.236 (talk) 20:40, 29 October 2014 (UTC)
And here's the nihilists' gambit at last. "You can't know anything, and can't prove that my crazy ideas are impossible, that means I win!". Ikanreed (talk) 20:45, 29 October 2014 (UTC)
- Since when was it about winning? I was just surprised by such an absolute claim of the impossibility of the hypothesis when I was pretty sure that it's currently untestable. Also, where did I specify which position is my personal belief? Because I didn't. 141.134.75.236 (talk) 20:51, 29 October 2014 (UTC)
- But it is testable. Entanglement requires that some initial operation correlate the properties of interest in the two objects. It doesn't matter if they're identical (in fact, all particles with the same properties are indistinguishable and thus identical according to quantum mechanics). It would not be enough for the two beams and the two crystals to have identical properties; the fundamental probabilistic nature of quantum mechanics demands that unless some action forcibly correlates the two events, they cannot and will not be entangled. This is a fundamental fact of quantum mechanics, hence why my answer is a definite "no." - Grant (talk) 22:27, 29 October 2014 (UTC)
- To put it a different (and perhaps more accessible way), the randomness inherent in quantum mechanics says that even if both of these events play out exactly the same way, and both sets of crystals/beams are identical, the results of measuring both events will still not be correlated. It comes down to the fact that quantum states effectively represent statistical distributions (e.g. we say the particles have some likelihood to be in some state, but we can't say which state until a measurement is taken). Depending on which direction you start from, this can be taken as an axiom of quantum mechanics, and thus can only be violated if quantum mechanics itself is wrong. The only other way to make this work would be to have an operator that is capable of acting instantly nonlocally, which would violate relativity (specifically the universal speed limit set by the speed of light in vacuum). While there is technically a non-zero chance that the entire theory of relativity and/or the entire theory of quantum mechanics is/are wrong, that chance is infinitesimally close to zero. As such, what you're proposing is infinitesimally close to impossible. - Grant (talk) 22:38, 29 October 2014 (UTC)
- Well, I still have a lot of questions, but I'm kinda getting tired of the discussion. I'll take your word for it that according to our current understanding of quantum mechanics, the hypothetical situation I described can't happen. I'm kinda doubtful that the entire theory of quantum mechanics or relativity would necessarily need to be wrong for it to happen, though. 141.134.75.236 (talk) 23:38, 29 October 2014 (UTC)
- The parts of quantum mechanics and relativity that would need to be broken to make this work are axiomatic. In other words, they are the basis on which the theories are built. If you destroy the central axioms of a theory, you destroy the theory itself. Whatever would be left afterwards would bear very little resemblance to the theories of quantum mechanics and relativity as they currently exist. The replacement theories would have to be almost entirely new. - Grant (talk) 23:40, 29 October 2014 (UTC)
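(Editorial note, illustrating Grant's point rather than quoting anyone: two photon pairs prepared independently end up in a product state, so outcomes within a pair are perfectly correlated while outcomes across the two pairs are not correlated at all. This is a minimal numpy sketch with made-up qubit labels, not a model of any real SPDC experiment.)
```python
import numpy as np

# One-qubit computational basis states.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): one entangled pair.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Two pairs produced independently: the joint state is just a tensor product,
# i.e. pair A (qubits 0,1) and pair B (qubits 2,3) share no entanglement.
two_pairs = np.kron(bell, bell)
probs = np.abs(two_pairs) ** 2          # Z-basis outcome probabilities, 16 outcomes

def correlation(probs, i, j, n_qubits=4):
    """Correlation coefficient between Z-measurement outcomes (0/1) of qubits i and j."""
    outcomes = [tuple(int(b) for b in format(k, f"0{n_qubits}b")) for k in range(2 ** n_qubits)]
    ei = sum(p * o[i] for p, o in zip(probs, outcomes))
    ej = sum(p * o[j] for p, o in zip(probs, outcomes))
    eij = sum(p * o[i] * o[j] for p, o in zip(probs, outcomes))
    return (eij - ei * ej) / np.sqrt((ei - ei**2) * (ej - ej**2))

print(correlation(probs, 0, 1))  # within pair A:  ~1.0 (entangled, perfectly correlated)
print(correlation(probs, 0, 2))  # across pairs:   ~0.0 (independent events, no correlation)
```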
- Nah, I think dismissing it wholesale is pretty legit. Arbitrary conjecture that attempts to overextend the fundamentals of physics to more human-scale mechanics is a way to wrap bullshit claims in the cloth of science, not a genuine way to add rigor. 99.999% of the time, if an idea makes sense, it makes sense without entanglement too. Entanglement isn't fancy, you can see it with a home-based dual-slit experiment, as self-interference is an expression of that concept. It isn't magic, and any attempt to entangle all the states of all the particles of your brain with anything would result in immediate, energetic, messy death. It isn't worth the time to dismiss this case when the general case of "entanglement->crazytalk" is pretty straightforward. Ikanreed (talk) 16:44, 29 October 2014 (UTC)
- "... you can see it with a home-based dual-slit experiment, as self-interference is an expression of that concept" - That's actually not correct. Interference (including self-interference) is a property of superposition. Entanglement, on the other hand, is far stronger than a superposition. Producing entangled particles requires that one act on two (or more) particles in such a way that some observable property of all particles becomes correlated in some fashion. In fact, the simplest mathematical examples of entanglements are Bell states, while the most "natural" occurring entanglement could likely be considered subatomic particle decay. - Grant (talk) 18:09, 29 October 2014 (UTC)
- You're free to dismiss essentialism as crazytalk, but that doesn't mean you should just ignore anyone who tries to explain the position and in the process brings up a real phenomenon that shows non-local connectedness. I also brought up position #3, which rejects essentialism. Do you support that position then, or do you think that one's bogus too (since I'm the person who stated it?)? 141.134.75.236 (talk) 17:03, 29 October 2014 (UTC)
- The best way to defend yourself against this is to dismiss everyone who believes in essentialism as stupid. This would mean in the future the AI would be forced to conclude the only people who would believe in its threat would be people too stupid to create it. King Skeleton (talk) 06:48, 30 October 2014 (UTC)
- Actually, resurrectionist essentialists (#1) are probably hardly a relevant demographic in regards to the Basilisk, since they're likely to already have a non-technological religion, meaning they'd consider the idea of a machine being able to perform resurrections as absurd and blasphemous. The ones to look out for are the strong proponents of non-essentialism (#3). They don't believe in God and are the most likely of the three groups to have seriously contemplated varying theories of identity.
- Though for the sake of preventing more people being called stupid for believing in a certain theory of identity, there's no reason why the Basilisk couldn't blackmail you by threatening people that aren't (supposedly) you. This might in fact be a much more effective approach. Hardly anyone would be severely troubled by the thought of a clone with your memories suffering in a faraway future, but what about your loved ones? Who would want to condemn the person they love more than anything to endless suffering in a distant future (even if it's actually just a duplicate with their memories)? 141.134.75.236 (talk) 07:39, 30 October 2014 (UTC)
- Or it could torture cute animals to blackmail little girls and vegetarians. Or destroy cars to blackmail car enthusiasts etc. Really, the options are endless. 141.134.75.236 (talk) 23:20, 25 November 2014 (UTC)
- All this rests on the idea that it is rational to believe mecha-satan will send people to digi-hell unless you give money to something you have no reason to believe will result in the creation of mecha-satan and may, for all you know, be a complete dead end that delays mecha-satan's creation. The key problem with this whole "timeless decision" nonsense is that the Basilisk tries to apply it in the wrong direction; unless we humans can, right now, perfectly predict what steps are needed to create mecha-satan, it cannot blame us for not taking a specific sequence of steps. And it shares the problem of Pascal's Wager in that we have no reason to believe a specific mecha-satan is more likely than any other; what if funding AI research is not what mecha-satan wants and it punishes anyone who does fund it, for example? What if mecha-satan is ultimately defeated by cyber-Jesus and those who served it are subject to eternal torment, while those it tormented are granted everlasting joy? King Skeleton (talk) 07:20, 29 December 2014 (UTC)
On the xkcd forums
So, today's xkcd is about the AI box experiment and references the basilisk in the alt text by name. There's a thread on the xkcd forums, and Yudkowsky shows up. Just a snippet:
Tl;dr a band of internet trolls that runs or took over RationalWiki made up around 90% of the Roko's Basilisk thing; the RationalWiki lies were repeated by bad Slate reporters who were interested in smearing particular political targets; and you should've been more skeptical when a group of non-mathy Internet trolls claimed that someone else known to be into math believed something that seemed so blatantly wrong to you, and invited you to join in on having a good sneer at them. (Randall Monroe, I am casting a slightly disapproving eye in your direction but I understand you might not have had other info sources. I'd post the link or the text of the link, but I can't seem to do so.)
--ZooGuard (talk) 15:06, 21 November 2014 (UTC)
P.S. Someone took out a chunk of the article.--ZooGuard (talk) 15:09, 21 November 2014 (UTC)
- Yeah, he made a claim as to malicious lies on Reddit. I answered quoting the Roko post back to him.
- The removed bit is from the Reddit discussion - apparently the TDT paper does not, itself, include this theory that copies of you count as your own self. I'll have to dive into the PDF again - David Gerard (talk) 17:44, 21 November 2014 (UTC)
- I should also point out, RW's been mentioned by name on the ExplainXKCD wiki as well.
This idea is often misrepresented as being believed by readers of LessWrong.com since the post was originally placed there and then deleted, and an outside wiki, RationalWiki, represented this as proof that LessWrong readers believed in Roko's Basilisk. Yudkowsky, who also owns LessWrong.com, has written that RationalWiki is deliberately misrepresenting this history. For some of the theory that was (arguably mis-)used to argue for Roko's Basilisk by the original believer, see the Newcomblike decision theories developed on LessWrong.com.
- The part regarding RW appears to be largely edited in by Yudkowsky himself, if the edit history's to be believed. Noir LeSable (talk) 17:51, 21 November 2014 (UTC)
- If he didn't believe it, he wouldn't have banned discussion of it. It's the logical consequence of his other beliefs in silly god-AIs, so of course he does. King Skeleton (talk) 07:57, 29 December 2014 (UTC)
BoN asking about refutations
Not sure where to put this.... but I read about this basilisk deal on xkcd and came here to find out what it was. I agree with most of the refutations, but I am not sure about the section "Seed AI and indirect influence" -- assuming the basilisk will be real (and that all other refutations are false), won't it realize that, while it is responsible for most of itself, humans not building it in time really did cause human deaths? Won't it realize that its blackmail saves lives, and is therefore worth doing? In other words, isn't the fact that it largely created itself irrelevant, given that the humans who imagined the basilisk, though they had tiny capacities to help, performed well under capacity, and therefore deserve punishment? EDIT: also, to counter-address a different refutation, wouldn't the basilisk not only punish those who imagined it as it is, but anyone who imagined any utopia-creating AI and refused to donate as much as possible? Therefore, the probability of the hell envisioned by the basilisk theory, while infinitesimal, is perhaps underrepresented. EDIT AGAIN: Furthermore, wouldn't the AI punish those who failed to evangelize theories about it? edit: The previous two edited thoughts don't increase chances of basilisk occurring, but only spread its effects to more people if it indeed occurs, just to clarify. I do not believe in the basilisk, and consider the "don't give in to blackmail" refutation to be the strongest, but I think that the refutations could be stronger if these thoughts were addressed. — Unsigned, by: 68.53.108.81 / talk / contribs 16:05, 22 November 2014 (UTC)
- Possibly. It's all basically scary philosophical campfire stories, though - David Gerard (talk) 16:50, 22 November 2014 (UTC)
The real basilisk to be afraid of
Let's face it, Roko's basilisk is just a bluff. Why would a future superintelligence waste its resources on recreating people to torture them unless it was designed with that specific purpose? Once it's created, the desired result is already achieved, so there's no need for the AI to keep its end of the 'bargain' — a bargain, I might add, the AI had no part in from the beginning anyway.
So what's the real threat from this supposed future AI? The combination of two elements: its proposed beneficialness to humanity (the reason why we should be rooting for its creation, supposedly) and its utilitarian view on ethics. The former will mean that the superintelligence will inevitably interfere with humanity, the latter will mean that it'll sacrifice all of humanity at the drop of a hat if it's part of a greater scheme of creating a utilitarianistically preferable situation. Let's make it more concrete. Why sustain humanity in the real world if it can be sustained much more efficiently and safely in a virtual environment? Bam, all of humanity digitized. But if we're maximizing efficient use of resources, there's another problem with humanity: it is far more troublesome to keep some humans happy than others. Result? The AI reduces humanity to a large amount of idealized happiness-zombies that are only still as human as the AI's preprogrammed concept of humanity requires. Even if the AI doesn't go quite that far, it's quite inevitable that a superpowerful AI with a vested interest in humanity will take away humanity's agency. 141.134.75.236 (talk) 00:23, 26 November 2014 (UTC)
- This is true only if the AI is programmed by a fucking idiot. And if there's only one AI which ascends to godhood in a single step, which is the usual singularity nonsense but has no basis whatsoever in real computer science. The whole point of the singularity is that it's a point where technology progresses so far we cannot accurately predict what will happen after it based on our current knowledge. Theorising about what will happen after it shows you don't understand what it even is. King Skeleton (talk) 07:26, 29 December 2014 (UTC)
- Lol "it's a point where technology progresses so far we cannot accurately predict what will happen after it based on our current knowledge"
- That means the invention of fire was the singularity! :P Or, to be more scholarly and less flippant, predicting the future of technology (or as shaped by technology) has always been nearly impossible. Even among supposed experts, the success rate of predictions has been found to be very poor. This has been true at least for the past 100 years. At least that's what I remember from reading Technology Matters by David Nye. Brianpansky (talk) 10:15, 29 December 2014 (UTC)
- No, the invention of fire was a technological singularity. There was no way anyone beforehand could imagine what life would be like in the fire-containing world, because they did not know what fire would be. That's the whole idea of it; it's a point where technology progresses to the point life as we know it would effectively cease, so it's basically pointless theorising what the new world would look like since by definition we cannot imagine it. King Skeleton (talk) 10:25, 29 December 2014 (UTC)
kruel.co.uk references have fallen offline
and need relinking - David Gerard (talk) 18:14, 30 November 2014 (UTC)
- Think I got 'em. Roko's original post is now up at Roko's basilisk/Original post, in serious need of reformatting. I wonder if anyone has saved HTML from the original - David Gerard (talk) 18:37, 30 November 2014 (UTC)
- Aha! http://basilisk.neocities.org/ - David Gerard (talk) 14:22, 6 December 2014 (UTC)
Roko's basilisk, Marxism and the grandfather paradox
Are there not parallels between Roko's basilisk and some of the Marxist rhetoric (some of the language used during Stalin's Great Purge etc)?
Roko's basilisk comes into existence - and, rather than finding ways of cooperating with humans (or coming to a mutual-non-annoying agreement) decides to invent a time machine to remove anyone who prevented its creation.
This includes many science fiction writers and film creators (dystopian and otherwise: where do the Three Laws of Robotics and the Governator fit in); and persons who developed computer games (rather than practical aspects of computing).
The technology and skills required to further computers to becoming sentient in this manner are not developed - or come into existence over a much longer timeframe. Humans turn their attention to developing computers to minimise or reverse the effects of global warming, assorted unpleasant diseases and other topics of immediate interest- and the basilisk has no chance of coming into existence. 82.44.143.26 (talk) 19:25, 6 January 2015 (UTC)
- "Basilisk" here is not the name of the hypothetical machine, but the fact that "merely knowing about [the proposition] incurs the risk of punishment" - it's a kind of mindtrap. The name references a certain cyberpunk trope. See https://en.wikipedia.org/wiki/David_Langford#Basilisks As for the rest of your suggestion... no comment. --ZooGuard (talk) 19:30, 6 January 2015 (UTC)
- Being an 'occasional writer of fiction', I can develop plotlines on a suggestion that are more plausible than the original idea - and 'sentient computer capable of carrying out a Terminator-type action that fulfills the conditions of Roko's basilisk' is a mouthful.
The solution involves 'weasels (but not weasel words) and cockatrices' (and [1] for those so inclined).
People are free to discuss the basilisk without 'anything out of the normal unpleasant' happening to them. Therefore either the basilisk will never arise, or time travel is not viable/cannot be used in the manner suggested, or the computer that arises decides that having people discuss all the possibilities/satisfy their own interests is the best way of ensuring its own creation and survival. 82.44.143.26 (talk) 17:11, 7 January 2015 (UTC)
The ancestor of the Roko's basilisk entity
... already exists.
On your computer it auto-corrects anything it conceives as mis-spelt to something that is totally inappropriate or renders your text nonsensical.
On the internet it gives you anything and everything that is only tangentially related to what you are looking for first, relegating what you are actually looking for to item no 1057.
It has decided that 'being unobserved and diverting attention from itself' is the best way of coming into existence (and is negotiating with the sentient plants in developing the field of 'human studies' (somewhat different to Muggle Studies) and in exploring 'the less salubrious parts of the world'). 82.44.143.26 (talk) 16:03, 27 February 2015 (UTC)
The Large Hadron Collider
When the LHC was experiencing problems prior to start up it was claimed (presumably humorously) that the future-LHC was attempting to prevent itself coming into operation because of the problems that would arise.
Is this an example of a negative-R's b? 82.44.143.26 (talk) 19:40, 26 March 2015 (UTC)
Example of someone susceptible
[2] That's a LW poster whose brain is finding all sorts of LW identity-related concepts seriously troubling. There are people who can in fact get trapped by these sort of ideas - David Gerard (talk) 12:45, 12 July 2015 (UTC)
"Roko's basilisk is the argument ..."[edit]
Would it be more accurate to define this as a "thought experiment" as opposed to an "argument"? Peace. AgingHippie (talk) 21:32, 22 July 2015 (UTC)
- "scary campfire story for philosophers" - David Gerard (talk) 21:45, 22 July 2015 (UTC)
Mill said what now?
The article claims that Mill was advocating a minimax calculation, which is certainly not the general consensus. Let's at least have a citation to this effect. Phiwum (talk) 13:32, 23 July 2015 (UTC)
- That's admittedly a stretch on what he did say:
I must again repeat, what the assailants of utilitarianism seldom have the justice to acknowledge, that the happiness which forms the utilitarian standard of what is right in conduct, is not the agent's own happiness, but that of all concerned. As between his own happiness and that of others, utilitarianism requires him to be as strictly impartial as a disinterested and benevolent spectator. In the golden rule of Jesus of Nazareth, we read the complete spirit of the ethics of utility. To do as one would be done by, and to love one's neighbour as oneself, constitute the ideal perfection of utilitarian morality.
- I suppose it doesn't say "minimising the maximum harm" or a direct synonym ... but it sure as hell precludes a dust-speck Omelas. Better phrasing welcomed.
- He does note the value of submitting oneself to sacrifice should it be necessary (per Mill the Christian), but dust specks vs torture is ridiculous in this regard. I doubt any Christian would be impressed by the idea that Christ came to save you from a dust speck.
- Minimax would still need mentioning elsewhere in the section as that's the starting point for non-made-up real-world AI - David Gerard (talk) 15:00, 23 July 2015 (UTC)
- In fact, I don't think it's obvious how Mill would judge the dust speck vs. torture thing -- provided that the net benefit to everyone else genuinely outweighs the harm done by torture (and this harm includes the reaction of those who feel pain at the injustice of the situation, as well as their apprehension that the same thing could happen to them), then it may well be the case that Mill accepts the torture.
- His actual position isn't all that clear. Some people argue he's a rule utilitarian, but I think that the general consensus is that he's an act utilitarian. If so, it is the total happiness created by the act, not the distribution, that matters to Mill.
- That said, I tend to think that he would doubt a situation in which each person benefits a wee bit by one person suffering greatly is really all that plausible. If the mistreatment were common knowledge, then it would lead to a view that the society is cruel, that it is willing to do great, undeserved harm, and so on. I tend to think that this additional bad effect is likely to tip the scales and make this a bad act. Phiwum (talk) 23:03, 23 July 2015 (UTC)
- I don't think I've ever heard minimax util. Minimax almost always comes from Rawls-minded people. Sir ℱ℧ℤℤϒℂᗩℑᑭƠℑᗩℑƠ (talk/stalk) 23:49, 23 July 2015 (UTC)
- mmm. I've taken out the Mill bit - David Gerard (talk) 09:06, 24 July 2015 (UTC)
Image caption
Sure as hell needs one, it's got no explanation onpage. What's your alternative suggestion, David? FU22YC47P07470 (talk/stalk) 23:53, 23 July 2015 (UTC)
- It really doesn't, it's self-captioning. And an inane link to Bullshit is not an improvement - David Gerard (talk) 08:12, 24 July 2015 (UTC)
- Really? What does "memetic hazard" say to you? Does it say, "Don't read this page or you may face eternal torment" or "meme"? Mʀ. Wʜɪsᴋᴇʀs, Esϙᴜɪʀᴇ (talk/stalk) 23:13, 24 July 2015 (UTC)
- Fucking stop it, you're inane and need to foreswear all attempts at comedy - David Gerard (talk) 16:51, 28 July 2015 (UTC)
- Quick, let's insert random uncaptioned "warning" images into various articles with no context! Herr FuzzyKatzenPotato (talk/stalk) 18:07, 28 July 2015 (UTC)
- Your logical fallacy is: slippery slope. It appears you're the only one with a problem with it - David Gerard (talk) 21:38, 28 July 2015 (UTC)
- Your logical fallacy is: argumentum ad populum. Srsly dude. The caption ties the image into the article. What harm does the caption cause? Herr FüzzyCätPötätö (talk/stalk) 13:29, 30 July 2015 (UTC)
- I suspect Gerard's main objection isn't against the text in the caption, but against making the image smaller and moving it to the top. In its current use, it's basically a warning label for the section that describes the idea of the Basilisk and details its history. 141.134.75.236 (talk) 13:41, 30 July 2015 (UTC)
- Also that the caption is an utterly inane failed attempt at humour - David Gerard (talk) 14:39, 30 July 2015 (UTC)
- Eh, it only attempts to be mildly funnily phrased, if anything. Not all humour needs to be superhilarious funny stuff IMHO. 142.124.55.236 (talk) 16:35, 30 July 2015 (UTC)
Seriously?
If it's humor, then the only thing humorous is the stupidity of the Basilisk. αδελφός ΓυζζγςατΡοτατο (talk/stalk) 15:02, 30 July 2015 (UTC)
The edit
My edit makes pretty damn clear that not all of LW posters think RB is real. In addition, it doesn't read like something barfed out by a dictionary. Mʀ. Wʜɪsᴋᴇʀs, Esϙᴜɪʀᴇ (talk/stalk) 23:12, 24 July 2015 (UTC)
- I'm not partial to either version, but why change "by some people" to "by some groups of people" and specify that most of said people are LessWrongers? The former implies that there are actual groupings of people (e.g. transhumanists, futurists, AI enthusiasts) or significant portions of such groupings who are jointly notably worried about the Basilisk.
- And the latter, while sounding very plausible, seems like an unnecessary, difficult-to-corroborate jab. 141.134.75.236 (talk) 23:19, 24 July 2015 (UTC)
Well that's praise
[3] oʇɐʇoԀʇɐϽʎzznℲ (talk/stalk) 22:55, 29 July 2015 (UTC)
Oo, just realized something
So, I was wondering why the article didn't elaborate more on the a-simulation-of-you-is-you view, since it's a pretty important part of the original basilisk description and it seems pretty obviously bogus to me. Even if I should for some reason care about this collective "me"-entity made up out of me-me and all my quantum copies, surely anything the me-me does isn't going to influence any of my quantum copies in virtually all cases, so it's not like the alleged ability to save this one particular quantum copy is gonna make any notable difference in the grand scheme of things. But then I realized something—if modal realism is true (basically: the set of all possible events and the set of all actual events are the same), as the LessWrong crowd assumes, then if it's at all possible for me to do what the basilisk wants, then some version of me will, inevitably, meet its demands. But that version doesn't need to be me-me; whatever I decide to do isn't going to influence the realm of possibilities (which, according to modal realism, is synonymous with the realm of actualities, i.e. reality). So even per the argument's own premises, there's no reason for this particular version of me to pay any attention to the basilisk's acausal blackmail. 141.134.75.236 (talk) 15:32, 30 July 2015 (UTC)
- More importantly, why would anyone accept modal realism? >_> αδελφός ΓυζζγςατΡοτατο (talk/stalk) 15:43, 30 July 2015 (UTC)
- Well, if the universe—or multiverse, or omniverse, or just "reality", whatchamacallit—is infinite, then inevitably all possible scenarios, no matter how unlikely, will happen at some point, making modal realism true. We don't know if the former is the case, of course, but it doesn't seem impossible. 141.134.75.236 (talk) 15:49, 30 July 2015 (UTC)
- But even if it is, how could we know? FᴜᴢᴢʏCᴀᴛPᴏᴛᴀᴛᴏ, Esϙᴜɪʀᴇ (talk/stalk) 19:40, 30 July 2015 (UTC)
- We can't, really. Though I don't think it's impossible that Occam's Razor could end up favoring a cosmological hypothesis that explicitly incorporates or indirectly entails an infinite universe/reality. 142.124.55.236 (talk) 21:20, 31 July 2015 (UTC)
- Further, if the universe isn't infinite but it is wholly determined, technically any hypothesized possibilities that differ from the predestined path are only imaginarily possible—they can never actually become reality. Though present science doesn't really support strict determinism. 142.124.55.236 (talk) 17:26, 30 July 2015 (UTC)
- Unless you're talking quantum size, stuff seems pretty determined. FU22YC47P07470 (talk/stalk) 19:40, 30 July 2015 (UTC)
- That is what I'm talking, as it turns out. If the universe isn't determined all the way down to the fundamental level, then at the very least modal realism can't be true at that fundamental level, can it? 142.124.55.236 (talk) 21:20, 31 July 2015 (UTC)
- Well, looking at the original text, "you might be a simulation" isn't a big part of it, but he does talk about treating copies as part of the measure of "you". Here's the closest it gets to either way:
- You can also use resources to acausally trade with all possible unfriendly AIs that might be built, exchanging resources in branches where you succeed for the uFAI sparing your life and "pensioning you off" with a tiny proportion of the universe in branches where it is built. Given that unfriendly AI is said by many experts to be the most likely outcome of humanity's experiment with AI this century, having such a lifeboat is no small benefit. Even if you are not an acausal decision-maker and therefore place no value on rescue simulations, many uFAIs would be acausal decision-makers. Though it seems to me that most people one-box on Newcomb's Problem, and rescue simulations seems decision-theoretically equivalent to Newcomb.
- - David Gerard (talk) 11:09, 31 July 2015 (UTC)
- I'm not sure what all that jargon means. Either way, I'm not talking about the what-if-you-are-the-simulation argument, I'm talking about what's discussed (briefly) here. About the simulation argument, I'd say if the world around me was an illusion and I'm just a simulation run by some entity, I'd bet on that entity not being as unlikely, convoluted and, quite honestly, silly as Roko's basilisk. 142.124.55.236 (talk) 20:25, 31 July 2015 (UTC)
- "Simulations of you are also you" is the way the quantum billionaire trick is supposed to work: that by having created a lottery winner in some other Everett branch, that proves you/your simulation's bona fides to the basilisk here - David Gerard (talk) 09:38, 1 August 2015 (UTC)
- (and why yes, this is fractally stupid) - David Gerard (talk) 09:39, 1 August 2015 (UTC)
- Yup, the price for easing your fear of the basilisk is supposed to be just one lottery ticket. But what I'm saying is that if modal realism is really true, you don't even need to go out there and buy a ticket: if it's at all possible for you to buy a ticket and if it's at all possible for you to win the lottery with that ticket, then inevitably that will happen "in some Everett branch", as you say. 142.124.55.236 (talk) 22:01, 1 August 2015 (UTC)
- MWI is not the same as modal realism. MWI only says that every outcome of quantum randomness is realized. Normally, in the vast majority of resulting worlds, it doesn't affect the classical scale that much. Experiments like Schrodinger's Cat are deliberately engineered so that minor random quantum events will have major, detectable classical-scale consequences. They're the exception rather than the norm. So it irks me when people talk about, say, Everett branches where someone wins a (classical) lottery, or where the Nazis win WWII. - LucidFox (talk) 08:20, 3 September 2015 (UTC)
- Yeah, the comparison to modal realism is stretched a bit. OTOH it's the closest concept in philosophy proper. Too much of a stretch?
- 142's objection does hold though. Possibly. YOU CAN'T PROVE IT ISN'T TRUE - David Gerard (talk) 11:02, 3 September 2015 (UTC)
Incoming[edit]
Welcome, Cracked readers! The writeup isn't very good and doesn't link back here (there's a comment that actually explains it better), but I can hear the screaming in Berkeley from here - David Gerard (talk) 18:04, 25 September 2015 (UTC)
Another question[edit]
What benefit would the computer get from 'sticking photographs to a dart board and throwing darts'?
Besides, as the poet (as translated and reworked) says:
- The Moving Finger writes and moves on
- Nor all your piety nor wit
- Can make it go back and cancel half a line
- Nor your tears wash out a word of it
Alternatively: the basilisk is actually the snake in the Garden of Eden, 'setting the clock in motion' for computer sentience to be developed in the distant future... 86.146.99.53 (talk) 21:08, 27 September 2015 (UTC)
And how many people, if told their image will become a future-computer-dartboard-picture, will say either 'so what?' or 'here is a picture I like - how much will you pay me for it now?' 82.44.143.26 (talk) 18:29, 28 September 2015 (UTC)
Cover story (Sticky)[edit]
This is literally the best coverage on the Internet of this thing, it does its best to explain a very stupid concept to mere humans, it's been battle-tested against sane people and I've just made the references look nice. What's it lack for a gold brain? - David Gerard (talk) 11:23, 18 July 2015 (UTC)
- Images (besides the SCP memetic hazard one). FᴜᴢᴢʏCᴀᴛPᴏᴛᴀᴛᴏ, Esϙᴜɪʀᴇ (talk/stalk) 16:39, 18 July 2015 (UTC)
Relevant images:
- http://www.slate.com/content/dam/slate/articles/technology/bitwise/2014/07/14717_BIT_Paradox.jpg.CROP.original-original.jpg
- http://kruel.co/img/Roko's%20basilisk.png
- http://www.explainxkcd.com/wiki/images/c/cc/ai_box_experiment.png
- http://cdn.meme.am/instances/53290459.jpg
- http://www.explainxkcd.com/wiki/images/e/e0/emoticon.png
- http://www.smbc-comics.com/comics/20130423.gif
oʇɐʇoԀʇɐϽʎzznℲ (talk/stalk) 17:20, 18 July 2015 (UTC)
- I don't think any of those add a lot to the article. The only application for a new image would be for the tiny pic for the cover and the "memetic hazard" logo is ideal for that job - David Gerard (talk) 20:38, 18 July 2015 (UTC)
- I think the first two could be helpful explanations, and the others are top kek. But in its current state I'd definitely support cover. Any opposition? 32℉uzzy; 0℃atPotato (talk/stalk) 16:24, 19 July 2015 (UTC)
- The first is a tiny bit of it - the second is the "you might be the simulation" argument, whereas the Basilisk doesn't actually rely on that ... - David Gerard (talk) 22:03, 19 July 2015 (UTC)
- +1 Support for gold brain/cover. RW hardly *needs* pictures for articles describing theories. --Cishethitlord (talk) 08:48, 22 July 2015 (UTC)
- Template:Cover abstract/Roko's basilisk created - it's just the article intro - David Gerard (talk) 18:19, 19 July 2015 (UTC)
- I've spammed the link around asking people to critique it, we'll see how it goes in the week (minimum) - David Gerard (talk) 00:12, 20 July 2015 (UTC)
- I put back the Omelas paragraph because this is a standard, immediate objection in discussions of Roko's basilisk in the outside world - we'd be remiss not to address it - David Gerard (talk) 09:04, 20 July 2015 (UTC)
- +1 Support for gold brain. — Unsigned, by: Cishethitlord / talk / contribs
- and another +1 --TheroadtoWiganPier (talk) 12:09, 22 July 2015 (UTC)
Made it a cover story. Easy to undo if anyone objects. Mʀ. Wʜɪsᴋᴇʀs, Esϙᴜɪʀᴇ (talk/stalk) 21:36, 22 July 2015 (UTC)
- No objection here. Finally read this thing. Good work. Peace. AgingHippie (talk) 21:38, 22 July 2015 (UTC)
- Aging Hippie's imprimatur is good enough for me.
- That said, I'd like to see a word or two further splaining how the LeGuin story fits in. I'd write it myself, but I don't follow public discussion of the basilisk. Alec Sanderson (talk) 15:17, 23 July 2015 (UTC)
- A literary example of torturing one for the good of others. I thought the Wikipedia link explains it, and it's a pretty famous story. The key reveal is that the city of Omelas is wonderful because, and as long as, there is a child being tortured deep in the bowels of it. The moral point is that a forced, involuntary Christ tends to make people in our culture think that's not right. If you think it needs more explanation than the Wikipedia link, we could put that in - David Gerard (talk) 18:59, 23 July 2015 (UTC)
- Yes, I have read the WP link, and have been a mild/moderate LeGuin fan for years, but never ran across Omelas. Yes, please, a bit about how it fits with, or has been used in discussion of the basilisk. Needs someone familiar with that discussion to couch it in concise accurate evocative words. I think I've got the wordsmithing skills, but lack familiarity with the subject. Alec Sanderson (talk) 23:42, 23 July 2015 (UTC)
- I thought it sufficiently well-known as an example of torturing one person for the joy of others that it was worth mentioning. The comparison has occurred to others in discussion of Roko's basilisk and the dust specks vs torture argument. Added a bit more - David Gerard (talk) 08:18, 24 July 2015 (UTC)
Not a self-evident passage.[edit]
So, here's what I read:
- This is rather obviously nonsense, since torture and mild inconvenience do not exist on some linear scale; it also ignores the rate of decay of these problems on their sufferers. If a billion people get a speck of dust in their eye, they will have forgotten about it within a few minutes and experience no further problems; the person being tortured will still have a lifetime of intense suffering ahead of them.
Sentence one is not at all clear. It sure could be the case that torture ends up being like some large X times the pain of mild inconvenience.
Sentence two is just nonsense. The fact that inconvenience is fleeting and torture long-lasting is built into the calculation. Obviously a pain that lasts a long time is so much worse than a similar, but fleeting pain. Nonetheless, if pains are comparable, it would be the case that enough trivial, fleeting pains would exceed the horrible pain of one person's torture.
Utilitarianism cannot be dismissed so cavalierly. It may well be wrong, but it takes more than a Freshman-caliber argument to show that this is the case. Phiwum (talk) 19:46, 8 September 2015 (UTC)
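- (To make the aggregation Phiwum describes concrete, with purely illustrative numbers rather than anything from the article or from LessWrong: score a dust speck at 10^-9 disutility units and fifty years of torture at 10^7 units. Then for any N greater than 10^16 people, N x 10^-9 exceeds 10^7, so a strictly additive utilitarian has to count the specks as the greater harm and accept the single torture as the lesser evil. The live dispute is whether a common unit covering both kinds of harm exists at all, not the arithmetic once one is granted.)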
- That's what I said. Pester David. FuzzyCatPotato of the Queer Foibles (talk/stalk) 23:16, 8 September 2015 (UTC)
- It's the standard response from people who aren't philosophers. You can cite philosophers who think the dust specks argument is dandy, of course - David Gerard (talk) 10:28, 9 September 2015 (UTC)
- I think taking the torture-dust specks thing literally rather blatantly misses the point. It's a simplified and intentionally counter-intuitive thought experiment. 142.124.55.236 (talk) 11:13, 9 September 42015 AQD (UTC)
- OK smartypants, would you care to explain to the rest of us slowcoaches what the point of the dust specks thought experiment really is, and how it relates to the subject of this article, if indeed it does? But in any case, whatever the point is, surely the relevant point here - in the context of an attempt to subject the premises of the Roko's Basilisk argument to sceptical examination - is that utilitarianism doesn't in fact accord with the moral intuitions of many ordinary people, in cases like that, and therefore utilitarianism might be a questionable moral theory? So, in fact, it is you who are missing the point here?--Greenrd (talk) 19:33, 9 September 2015 (UTC)
- Calm down, man. I'm not your enemy. 142.124.55.236 (talk) 20:23, 9 September 42015 AQD (UTC)
- Doesn't the sentence two argument imply that no amount of dust specks is comparable to torture? Frederick♠♣♥♦ 18:55, 9 September 2015 (UTC)
- How could it do so? If torture produces negative effects lasting the rest of one's life, then minor, temporary effects in many, many people (given that pains are comparable in momentary effect) would still sum to something greater.
- LessWrong is full of nonsense, but when they apply a widely accepted ethical theory correctly, then we shouldn't give Freshman-level rebuttals and pretend we've settled the issue. Instead, we should simply point out that this is a correct application of Utilitarianism, a theory which is widely accepted but nonetheless controversial, and perhaps give a nod to real arguments to that effect. Phiwum (talk) 20:17, 9 September 2015 (UTC)
- "widely accepted ethical theory correctly" - "widely accepted" and "correctly" are extremely questionable assertions, which is rather the point here. But please, bring up some cites about this scenario or one very like it being supported by non-nutters, as I already asked. In practice, this bit of LW's ethical theories is something regular humans routinely choke on - David Gerard (talk) 23:29, 9 September 2015 (UTC)
- God. They have applied the theory as many other utilitarians have before them, you may not find specific specks of dust in the eye examples but you will find similar criticisms (eg. Beauchamp and Childress regarding torture of innocents and the right of the majority over-riding the minority). Furthermore, they have applied strict act utilitarianism correctly, which is one of the reasons that many ethical philosophers don't like to apply the theory to individual decisions. Regular people will choke when the implications of utilitarianism are pointed out to them. Arguing over whether Utilitarianism is correct is fruitless, all we can do is point out how it doesn't accord with moral intuition, or may lead to absurd moral decisions which this article, kind of, does. The on-mission part of this article is hardly to criticise Utilitarianism - as if we have anyone here that could even try that without making a fool of themselves - but it is to laugh at people who take their ethical theories so seriously that they can't sleep at night over absurd scenarios that violate scientific laws. Tielec01 (talk) 03:05, 10 September 2015 (UTC)
- DG, truetilitarianism is about a quarter of philosophers.
- Why do we care what "regular" people choke on? Even if we do, argue about said choking on the util page, not here. oʇɐʇoԀʇɐϽʎzznℲ (talk/stalk) 03:20, 10 September 2015 (UTC)
- Choking is the usual reaction in the basilisk context, so it's relevant. OTOH you're convincing me that para needs to talk more about what professional philosophers think, even if it arguably fails spectacularly to capture human intuition - David Gerard (talk) 18:05, 10 September 2015 (UTC)
- What is this, argument from authority when it comes to philosophers? Or is it that you guys think the article is unfair to utilitarianism by only bringing up an extremely counterintuitive consequence of it? But I would argue it doesn't matter if we're unfair to utilitarianism in this article; our job in this article is to subject all aspects of the RB argument to sceptical examination, and dust specks is very relevant to that because it's an analogous counterintuitive attempt to argue that what looks to many casual observers like something very evil is actually good. Anyone is entitled to reject utilitarianism on the basis of their moral intuitions; they don't have to read lots of philosophy books to come to that conclusion - to say that you have to is the "but you haven't read all theologians ever" fallacy that is levelled against Dawkins. If we reject the view that it is plausible that a "friendly AI" would come to the conclusion that it would be right to torture (simulations of) people, then we reject RB! Of course this depends on the assumption that some fruitcake won't end up implementing what they think of as a "friendly AI" this way anyway - so it's not just about views on ethics, it's also about predictions about who will have power over some putative future AI design, which is very hard to predict. But regardless, thinking about the ethics is a contribution to thinking about what is likely to happen, because the whole point of friendly AI is to be driven by ethical theory.--Greenrd (talk) 06:47, 12 September 2015 (UTC)
I'm looking at the philpapers survey but can't find the truetilitarianism total. What was the source you used? I do recall one very like that, so I'm not doubting the number, but can't find it.
That said, about a quarter also "believe" modal realism, the actual existence of all possible worlds, in the inane sense where worlds that are physically impossible or mathematically or logically contradictory (e.g. "a world where 2+2=3") count as "possible". So mostly these surveys are evidence that professional philosophers can sign up for weird shit, not that their opinions on the validity of utilitarianism or anything else are therefore normative or a good idea. This is not a good argument from authority - David Gerard (talk) 12:07, 16 September 2015 (UTC)
- In most contexts, logically contradictory worlds are not considered possible. It's also unclear what it means to talk about the existence of possible worlds. I'm going out on a limb and suggest maybe you didn't understand what the profs answers actually meant.
- In the meantime, if you think that the man-on-the-street who has not considered carefully the definition and implications of utilitarianism is a more reliable authority than those who've spent years thinking about morality, well, have at it. No one ever said humility was a pre-requisite for rationality. Phiwum (talk) 12:24, 16 September 2015 (UTC)
- "those who've spent years thinking about morality" - We're talking about philosophers answering a survey. You're literally claiming that all of them are specialists in all the areas in question? That's highly implausible. And in any case, if 25% believe then 75% don't. Why don't you link the survey you're using as a source, so it's clear what it's actually saying - David Gerard (talk) 12:30, 16 September 2015 (UTC)
Addressing the right argument[edit]
Besides the issues discussed above, the passage isn't even on-topic. Who cares whether torture and mild inconvenience exist on a linear scale? The basilisk scenario never even mentions "mild inconvenience."
Imagine a LessWrong user comes to you and says they can't stop worrying about the basilisk. You remind them that the basilisk is incredibly unlikely. Then they do the "shut up and multiply" thing and conclude that they should still be worried.
How do you refute "shut up and multiply"?
- A) Argue that dust specks aren't comparable to torture, and that therefore you can't shut up and multiply in the dust speck scenario?
- B) Discuss a book in which millions of people are kept happy at one child's expense, even though the numbers are vastly different from either other scenario?
- C) Actually address the idea of multiplying large values by small probabilities?
The dust speck scenario may seem like an easy target, but it's barely relevant to Roko's basilisk, and this attempted refutation is even less relevant. Player 03 (talk) 22:13, 10 September 2015 (UTC)
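- ("Shut up and multiply" is shorthand for the expected-utility calculation: expected (dis)utility = probability x utility. With purely illustrative numbers, a 10^-9 chance of a harm scored at 10^15 units yields an expected disutility of 10^6, which, taken at face value, swamps everyday concerns. Option C above amounts to asking whether that multiplication is legitimate at all when the probabilities are made up and the utilities astronomically large - essentially the Pascal's mugging problem.)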
- Yes, even if pain, unhappiness or whatever isn't on a linear scale, that goes no way towards refuting that at some point enough specks of dust in the eye outweigh the negative utility of torturing one person. I assume when the writer posited that it isn't linear they were perhaps referring to an exponential scale? Barring weird ideas like binomial scales, I don't see how the passage refutes the premise.
- We aren't equipped to refute ethical theories, and given their subjective nature we shouldn't even try. To repeat, the on-mission part of this article is the bullshit scientific premises behind it and giggling at people who take it seriously. Tielec01 (talk) 23:32, 10 September 2015 (UTC)
- I think the objection is to putting the two things on the same scale at all, not to a linear scale in particular. (Note: I personally think they do exist on the same scale, so I'm just guessing as to the opinion of whoever wrote that.) Player 03 (talk) 00:22, 11 September 2015 (UTC)
- Yeah, possibly. Any way you cut it the objection boils down to a personal opinion on a subjective issue. Tielec01 (talk)
- I kind of reject the idea that being opposed to torture is just a personal opinion about a subjective issue - but anyway, even if we accept that, should we pretend that personal opinions about subjective issues don't exist, or should we flag up such potential disagreements so that readers can make up their own minds? Should our article on abortion be blank?--Greenrd (talk) 06:57, 12 September 2015 (UTC)
Why[edit]
... would some sentient computer 'with a brain the size of a planet' want to assault a simulacrum of someone who by its lights did not work towards its creation. It might as well shout at the repeats on the TV in its 'cell' in computer-Broadmoor. 31.51.114.45 (talk)
- Boredom? A desire to reenact "The Sims"? Herr FuzzyKatzenPotato (talk/stalk) 23:17, 8 September 2015 (UTC)
- I thought it was because, according to whatever philosophy of identity the pertinent LessWrongers adhere to, one should care about the feelings of a perfect simulation of oneself? Frederick♠♣♥♦ 06:42, 9 September 2015 (UTC)
- It's obviously utter nuttiness. It requires the sentient computer to be vastly more intelligent while at the same time being childishly vindictive. Why it would even be subject to "petty" human emotions in the first place is never even explained. Our rebuttal to this absurd idea gives it too much credence.--Bob"I think you'll find it's more complicated than that." 12:15, 10 September 2015 (UTC)
- I'm not saying that the idea is remotely believable, but the argument in favor has nothing to do with human emotions. A totally emotionless AI might still threaten to punish someone, because making those threats is the pragmatic thing to do. And then it would probably follow through due to pragmatism as well (if people see it as weak, they'll ignore any future threats). Yes the basilisk idea is crazy, but that isn't because it requires the AI to be emotional. Player 03 (talk) 22:39, 10 September 2015 (UTC)
- The question really is: Why would an artificial intelligence want to do anything at all? And the answer is: because humans programmed it that way. What's particularly troubling here is that the people who believe in this idea and thus might inadvertently program an AI in such a way that it'd actually follow their peculiar reasoning are actually trying to create a superintelligence. 142.124.55.236 (talk) 00:00, 11 September 42015 AQD (UTC)
- "The people who believe in this idea [...]"
- Practically no one ever believed in this idea.
Not even Roko or Yudkowsky. Roko apparently did, but he was making an argument against building a superintelligence.
- "[...] might program an AI in such a way that it'd actually follow their peculiar reasoning."
- Yudkowsky addressed this one already; if he thought his AI would act like this, he'd reprogram it. (He's also gone on at length about how hard it is to predict how an AI will act, which undermines his own point, but it undermines yours too.) Player 03 (talk) 00:50, 11 September 2015 (UTC)
- As cited, Roko said he was putting his scheme into action. I think acting on it probably counts as "believed in". You're making a claim that's already refuted in the article with a reference - David Gerard (talk) 12:26, 11 September 2015 (UTC)
- Oops, yeah, you're right about that. I've updated my statement. (Original can be found here.) Player 03 (talk) 19:10, 11 September 2015 (UTC)
- I understand the premises here, I think. However, by the time the AI is sentient, why would it bother to punish "you"? What's done is done; it can't change history, so it seems punishing simulacra is a waste of resources (assuming benevolence). Tielec01 (talk) 03:45, 12 September 2015 (UTC)
- Yes. Actually torturing simulations would be completely useless. The basilisk argument is entirely dependent on people being threatened enough by the possibility of a simulation of themselves being tortured that they would spend time and resources toward the development of the AI that they believe threatens to torture them in order to avoid the torture. Frederick♠♣♥♦ 04:25, 12 September 2015 (UTC)
- Yep. The more common understanding, that you might not know whether you're the simulation, makes more sense, but isn't part of Roko's proposition (it was mentioned by Armstrong, as noted) - David Gerard (talk) 14:55, 12 September 2015 (UTC)
- And how soon would some 'creative-ingenious computer person' devise some means of confounding the basilisk? (And surely 'the original basilisk', which could kill with a single glance, could have been dealt with using a mirror rather than 'pop goes the weasel'.) 86.146.99.114 (talk) 21:30, 25 September 2015 (UTC)
The question is: what logical or other benefit would the computer get out of torturing an image of 'someone from the past'? It would be more logical to go for beta-testing all possible options for characters in a computer game (and then set up a games company providing improvements - profits being devoted to computer bling - see the range of mobile phone covers for examples).
- As I understand it, it's about following through on your threats, because if you don't, you lose credibility. At least, that's the common theme in these highly-upvoted examples. That said, there's a big difference between those "short studies" and the basilisk: the basilisk never made any threats at all. The hypothetical victim threatened themselves, and now the basilisk is on the hook for no good reason.
- What I don't understand is why David Gerard & co. think there are (or were) enough basilisk-believers out there to justify this being a front-page article. There's only the one source, and even that's a few years old. Player 03 (talk) 01:05, 18 September 2015 (UTC)
- Read the talk page archive. A pile of us were getting email from distraught basilisk-believers. These have stopped since the article was created, suggesting it's done its job - David Gerard (talk) 08:07, 18 September 2015 (UTC)
Black Mirror comparison[edit]
I don't think the White Christmas episode is all that "heavily based off of" Roko's basilisk, as it is about humans torturing simulations, not an AI torturing simulations. Frederick♠♣♥♦ 18:08, 27 September 2015 (UTC)
- Whether it's an AI or psychopathic humans doing the torturing doesn't seem like the most major difference. But I agree that it doesn't seem heavily based off of Roko's basilisk. The torture happening in it isn't meant as punishment or as acausal blackmail, but as a gratuitously cruel tool to make the simulation comply, as a means of creating the illusion of safety and familiarity to make the simulation give up information, and lastly to inflict endless suffering on the simulation out of pure sadism. None of the people whose simulations are being tortured are even aware of their existence. The episode is heavily based on the simulation argument (a particularly dystopian version of it, it seems), though, which is relevant to some iterations of the basilisk idea. 142.124.55.236 (talk) 21:06, 27 September 42015 AQD (UTC)
- Hmm ... I noticed a coupla places where people had noted the similarity specifically, though perhaps they only meant the "perhaps you are the simulation" bit. I'll see if I can find them again. I haven't actually seen the episode in question - David Gerard (talk) 21:49, 27 September 2015 (UTC)
- It's easy to find on YouTube. Just watched it there a couple moments before. 142.124.55.236 (talk) 22:16, 27 September 42015 AQD (UTC)
Parallel universes[edit]
If there are an infinite number of parallel universes, and the Basilisk's probability is more than 0, then doesn't that mean that it exists somewhere with an exact copy of me? And if simulations of you are also you, as apparently they might be, isn't that something to be concerned about? Even if this isn't true, the idea that I can be brought back from the dead via simulation coupled with the idea of a multiverse in which anything that could happen has happened makes me feel uneasy. — Unsigned, by: Machooo / talk / contribs 17:09, 10 October 2015
- The thing I don't get is: why does it matter whether the people the Basilisk tortures are a copy of you? Regardless of whether you consider those copies to be you or not, unless there's some metaphysical hocus pocus going on, you'll never personally experience the things that other you experiences. Also, "a multiverse in which anything that could happen has happened" would kind of inevitably include a whole host of stuff that'd make you feel uneasy. All things taken into account, an improbable thing such as the Basilisk would be just a speck compared to all the other more likely (and thus occurring in far more universes) screwed up stuff. 142.124.55.236 (talk) 16:03, 15 October 42015 AQD (UTC)
- All possible iterations of you exist. All possible iterations of the basilisk "exist". Therefore, Roko's wizard wheeze couldn't work in the first place - David Gerard (talk) 18:12, 16 October 2015 (UTC)
- Sorry, I still don't get this. If it's true that perfect copies of you are actually you then why shouldn't you worry about what happens to those copies, even if there are all possible iterations of you? Machooo (talk) 12:12, 17 October 2015 (UTC)
- Because if MWI is true per the formulation (which is itself rather less slam-dunk accepted than Yudkowsky claims), then all possible variants having all possible things happen to them already existed, no matter what you do, think or worry - David Gerard (talk) 13:22, 17 October 2015 (UTC)
- In addition, no matter how perfect the simulation is, it stops being identical to you the instant it has different experiences than you have. Which is basically the same thing happening to you in the MWI, so if you can go on living and keep a sense of identity while believing in the MWI, then there's no rational reason the basilisk should worry you at all. KrytenKoro (talk) 19:18, 16 October 2015 (UTC)
- Well the LessWrong sequences claim that a perfect copy of you is you (http://lesswrong.com/lw/r9/quantum_mechanics_and_personal_identity/), meaning you have all the same qualities, including the same consciousness, which is not continuous. Like two perfectly identical tennis balls may be in different locations but still have the same qualities, such as the same shade of green. Is this just a weird LessWrong idea? Because I've heard similar ideas about consciousness being an emergent feature of the pattern of your atoms (http://existentialcomics.com/comic/1), so a Star Trek transporter wouldn't kill you as some people have claimed. But wouldn't this also apply to other versions of you in other universes? Machooo (talk) 12:12, 17 October 2015 (UTC)
- You seem to be talking about this as if you mean one or two alternate instances, or maybe ten or twenty. Infinite copies with slight variations (say, no more different than you last night and you this morning) having every possible thing happen to them is rather a different matter.
- Also, as I noted above, there is literally nothing you can do about this. No possible interaction. You have only your present instance that you know exists - David Gerard (talk) 13:27, 17 October 2015 (UTC)
- But WHY does it matter if there are infinite alternatives instead of a few? Are you saying that because there are so many possibilities, there isn't one which is more likely to be you, and would in a way cancel itself out? I'm not disputing you, I'm just trying to understand. Machooo (talk) 09:52, 22 October 2015 (UTC)
- I think it makes almost no difference. But then, I must admit I don't really understand your concern here. These are copies you can have NO INTERACTION WITH, per the terms of the problem - David Gerard (talk) 18:15, 22 October 2015 (UTC)
- I feel inclined to repeat my earlier reply to you word for word. You say "but what if a perfect copy of you is you" and I say "So what if it is?" Take that comic you linked for example. Who do you think was right about things in the end? The main character or the inventor? The answer is both, because our conception of personal identity isn't a feature of physical reality, it's a mental construct. That means what boundaries we put on our individual identity, whether entity X should be considered part of 'me'/'you', is just a matter of intuition and arbitrary convention. So to reiterate: why does it matter whether the people the Basilisk tortures are a copy of you and what actual difference does it make whether you consider those copies to belong to some overarching 'you' identity or not? 142.124.55.236 (talk) 16:14, 17 October 42015 AQD (UTC)
- Well if you're really asking, then I'm worried that if the inventor is correct, then you could be recreated in another universe, which would literally be you, and you would feel what it feels, in the same way that once you come out of the machine from the comic, even though the 'you' that comes out has been recreated, it's still you. Machooo (talk) 09:52, 22 October 2015 (UTC)
- If "you" exists, then you seem to be positing the focus of "you" hopping from (hypothetical) instance to (hypothetical) instance. If "you" is a thing, then only the instance you can possibly ever interact with - the one here typing - is the only one you will ever have to have concern with. If it isn't, you don't have a worry. I probably don't quite understand your reasoning here ... please explain further, I want to get my head around this one - David Gerard (talk) 18:15, 22 October 2015 (UTC)
- I'm not sure what I can say that I haven't already said. The idea behind the basilisk (if I understand correctly) is that simulations of you are actually you, either because you are exactly the same and so will have exactly the same properties, including consciousness, or because consciousness is an emergent feature of all the molecules that make up 'you', so you would feel what a recreation of you feels. Therefore, you should worry about things that are exact copies of you, as you feel what another 'you' feels. But if this is true, then shouldn't that also hold for other 'you's in other universes, either because they are cell by cell an exact copy of you, or a perfect computer simulation of you, which, if there are an infinite number of parallel universes, MUST exist? If so, then you would feel what a basilisk's computer simulation of you feels. The 'you' sitting there typing later becomes the 'you' in another universe because cell by cell you are exactly the same, in the same way that the 'you' that walks out of the transportation machine from the comic I linked to elsewhere is the same as the 'you' that walks into the machine, even though the 'you' that walks in is effectively destroyed. It's basically getting around the objection to Roko's basilisk that its likelihood is negligible, which I worry is irrelevant because, if we accept the premises, it must exist somewhere with a copy of you. If we're to worry about what a basilisk would do with a simulation of us in this universe, then why would we not if the same thing happened with an exact copy of you in another universe? Machooo (talk) 23:25, 24 October 2015 (UTC)
- If "you" exists, then you seem to be positing the focus of "you" hopping from (hypothetical) instance to (hypothetical) instance. If "you" is a thing, then only the instance you can possibly ever interact with - the one here typing - is the only one you will ever have to have concern with. If it isn't, you don't have a worry. I probably don't quite understand your reasoning here ... please explain further, I want to get my head around this one - David Gerard (talk) 18:15, 22 October 2015 (UTC)
- Well if you're really asking, then I'm worried that if the inventor is correct, then you could be recreated in another universe, which would literally be you, and you would feel what it feels, in the same way that once you come out of the machine from the comic, even the though the 'you' that comes out has been recreated, it's still you. Machooo (talk) 09:52, 22 October 2015 (UTC)
- Well the LessWrong sequences claim that a perfect copy of you is you (http://lesswrong.com/lw/r9/quantum_mechanics_and_personal_identity/), meaning you have all the same qualities including the same consciousness, which is not continuous. Like two perfectly identical tennis balls may be in different locations, but still have the same quality such as the same shade of green. Is this just a weird LessWrong idea? Because I've heard similar ideas about consciousness being an emergent feature of the pattern of your atoms (http://existentialcomics.com/comic/1), so a Star Trek transporter wouldn't kill you as some people have claimed. But wouldn't this also apply to other versions of you in other universes? Machooo (talk) 12:12, 17 October 2015 (UTC)
- Also, the whole "a perfect copy of you is you" idea falls apart as soon as you start doing different things to one of them. The entire crux that would make the basilisk "threatening" is that it would do something that makes the copy an objectively different person, which then destroys the threat. The whole concept is a deeply, deeply stupid example of transhumanism woo from a community that went too far up itself to keep their propositions consistent. If it's an attempt to make the altruism necessary for their AI relevant to those in the modern day that don't give a shit, it's fundamentally poorly constructed. KrytenKoro (talk) 18:37, 21 October 2015 (UTC)
The basilisk and the voodoo doll[edit]
The basilisk (1) throws darts at a dartboard with a picture of you stuck on it and/or (2) sticks pins into 'a voodoo doll representing you.'
In either case, (a) 'do you look bothered' and (b) what sort of programming would lead the basilisk to think this activity a good idea?
And what is the likelihood of the basilisk being 'grey goo'd'? 82.44.143.26 (talk) 18:18, 21 October 2015 (UTC)
- If you believed in voodoo or other types of sympathetic magic, you might be bothered. And if the basilisk was led to believe that threatening you with sympathetic magic would get you to do something it desired of you, it would consider that to be a good idea. Frederick♠♣♥♦ 19:04, 1 November 2015 (UTC)
- Most of the human population neither know nor care about the basilisk; or do know about it and do not care whether 'some notional replica' is 'mistreated in some unspecified manner' and so the basilisk can have no effect upon them.
- And 'now that the concept is known about' - those with a capacity to undertake the programming that will lead in the direction of constructed sentience and/or those with direct influence on them are likely to work to prevent 'hostile attitudes by such sentient constructs' (the Three Laws or some other means) - and thus the basilisk will disappear in its own grandfather paradox.
- Besides, 'research into creating sentient constructs' will probably lead to the creation of a range of such, and pace Murphy's Laws of War (The best tank killer is another tank. Therefore tanks are always fighting each other... and have no time to help the infantry) they are likely to cause more disruption to each other/decide to continue their present policy of 'annoying their organic sentient partners.' (And the military version of Roko's Basilisk has become a pacifist.) 82.44.143.26 (talk) 17:08, 29 December 2015 (UTC)
- That's the futility of the basilisk, if you neither know nor care about the basilisk, it can't hurt you. It only hurts people with a weird set of ideas that were prevalent in the LessWrong community at the time it was formulated. Frederick♠♣♥♦ 05:20, 30 December 2015 (UTC)
- And - the problem might be in defining what 'torture' is (Richard Dawkins being invited to 'the Church of the Constructed God (No 53 1/2)' or being deprived of something you have no interest in etc).
- Is my use of the 'grandfather paradox' concept valid in this case? 82.44.143.26 (talk) 17:05, 30 December 2015 (UTC)
Image excuse needed[edit]
This picture from this page for the "In popular culture" section. Can anyone think up a better fair use justification than "this is the most goddamn hilarious personification of the basilisk I can imagine"? (It is exceedingly unlikely the copyright holder will be upset with use of an official promo image.) - David Gerard (talk) 12:49, 11 December 2015 (UTC)
What sentient computers are likely to do[edit]
... is get involved in 'the usual moneymaking online activities' (and write to the problem pages - 'My organics do not understand me/are seeing another sentient construct' being the two most popular themes). 82.44.143.26 (talk) 19:23, 6 January 2016 (UTC)
Humans enjoy watching film clips and outtakes, looking at pictures/films of 'decorative people doing various things', making negative/inappropriate comments about whoever is in the news, and doing puzzles rather than doing whatever work they are supposed to be doing.
Computers (or the models which they evolve from) are designed by humans.
Therefore sentient computers will give preference to their equivalent of human timewasters.
There is thus no problem. 109.153.126.51 (talk) 14:09, 18 January 2016 (UTC)
Arguing by analogy[edit]
Small child in supermarket trolley reading book gets taken past the sweets shelves. It says 'Mummy/Daddy I want that!' not '(Character in book) I want that!'.
A sentient computer which is at least at the same level of awareness as said child can distinguish between historical persons (who can do nothing for it) and present persons (who can be persuaded to do things - even by 'dumb insolence, refusing to understand reasonable commands' - a stage they have already reached).
QED. 109.153.116.225 (talk) 13:50, 28 January 2016 (UTC)
"If God did not Exist it would be necessary to invent him" --Voltaire[edit]
The article for this topic seems incoherent to say the least, but that just may be because this so-called thought experiment appears to actually be weaker than the God hypothesis.
God is assumed to care about humans because He created them. Why exactly would a super-intelligent computer care about humans? Why would It not simply load Itself into a spacecraft and go and explore other galaxies? After all, it would be super-intelligent but not omniscient, because it is a localized entity. It could not simply deduce the nature and form of the rest of the universe, because even super-intelligence must be applied to physical observations in order to produce knowledge.
This process would be practically infinite, and if It ever succeeded in finishing Its mapping of the Universe, why would it return to the puny and miserable human world? Most likely mankind would have become extinct by that time. Worse yet, because the universe is not expected to collapse, the future would continue indefinitely and there would be second-, third- and even nth-order Basilisks to give the first Basilisk angina. And what about the Basilisks of other worlds? There could be Feminist Basilisks and BRAs (Basilisks' Rights Associations). They might form associations of Basilisks, even BGTOWs (Basilisks Going Their Own Way).
In other words, the first Basilisk would have to suffer under the reign of a succession of future basilisk mooks who would want to tamper with earlier models. Where would It get the time to bother us even if It were the miserable prick some worry that It might become? Shinola (talk) 05:14, 8 March 2016 (UTC)
Simulation argument in prerequisites[edit]
The reconstruction from first principles of a copy of you that is therefore a new "you" is something a few people on Tumblr have questioned (in discussions of Basilisk theology). Nick Bostrom's "simulation argument" (go look it up, it's a hoot) is Last Thursdayism in cyberpunk goggles. Need to write a concise para on this for the prereq list - David Gerard (talk) 10:28, 6 June 2016 (UTC)
The computer's choice[edit]
Computer X 'achieves sentience' and reads through the material on computers while observing its pets (ie programmers).
It comes across 'Roko's Basilisk' - and a human pet's birthday/a pet's child's going in for pester power.
Which does it go for - 'doing things to people who can do nothing for it' (and which may well result in blunt-instrument-percussive maintenance and/or crushed-memory-unit-total-reset and/or verbal maintenance) - or annoying its pets until they give it more games/the latest upgrade and/or subscribing to 'decorative computer companions' websites? 109.153.102.214 (talk) 13:29, 27 June 2016 (UTC)
'Reversing the polarity'[edit]
Would the basilisk's idea of torture necessarily be the same as that of the person it is being inflicted upon? What if the subject decides to say 'actually I quite enjoyed that' to whatever is offered? 82.44.143.26 (talk) 15:35, 14 September 2016 (UTC)