Talk:Roko's basilisk/Archive6

This is an archive page, last updated 14 June 2023. Please do not make edits to this page.

The makers

Are we assuming that the makers of the AI wouldn’t program the AI to torture, whether it’s reasonable or not? And even if not, several decision theories allow one to follow through on acausal threats and promises.--Oblivious (talk) 21:11, 23 March 2021 (UTC)

Do you have a source for the claim that several decision theories allow following through on acausal threats and promises? 68.56.144.8 (talk) 22:05, 23 March 2021 (UTC)
I ask because a search for "acausal promise" brings up 12 step programs and something about the Sun, for "acausal threat" brings up LessWrong and RationalWiki, and for "acausal decision theory" brings up causal decision theory and LessWrong. Granted, it brings up multiple LessWrong theories, including TDT, UDT, ACDT, and FDT, but all of these theories belong to the same insular community, and none of them seems to have generated any interest outside that community. Yudkowsky's most recent theory seems to be FDT, and he seems to have gone beyond LessWrong with it, but only into his own comparably insular world, and the only paper he seems to have put forward on it is disappointingly vague. The reality is that if these theories were actually interesting alternatives to causal decision theory and evidential decision theory, by now they would have been picked up by decision theorists. If you have found a theory that can accommodate these ideas that didn't spawn from LessWrong, that could be interesting, but I cannot find anything. 68.56.144.8 (talk) 00:21, 24 March 2021 (UTC)
Second paragraph of this article says it: https://www.lesswrong.com/tag/rokos-basilisk

I honestly tried to look into it myself to see if that claim was true, and couldn’t find anything. So I just posted my comment here to see if anyone knew more about it. Besides that, however, wouldn’t future people program an AI to torture?

So, the best interpretation I can figure out so far is this: the AI is expected to immediately determine that timeless decision theory is optimal and immediately precommit itself to torture people who have access to certain information and respond to it in a certain, sub-optimal way. The precommitment guarantees the torture and legitimizes the threat. So even though the AI isn't programmed to torture, the underlying utilitarian value system that is programmed into it allows it to do so (at least according to Roko).

Personally, having looked at the problem a bit, I find that the principle that simulations of you are you is the most obviously problematic point, partly because it can be addressed without going into the details of decision theory. Consider a thought experiment: two identical twins are raised under perfectly identical conditions, so that even after reaching adulthood each is exactly the same as the other in every way. Assume that whatever conditions are necessary to enforce this situation are met (even though this would be highly unethical to do in real life). We'll also suppose for simplicity that the twins are isolated from each other, so that although each knows of the other's existence, they cannot directly interact with each other (i.e. they live on opposite sides of the state/country/world). One day, I go up to one of them and start beating him with a stick. Since the twins are the same in every way, this twin is effectively a simulation of the other one, and so beating him with a stick should, under the reasoning used in Roko's basilisk, qualify as beating the other twin with a stick. Now, does the other twin actually experience any form of pain or torture as I am beating his twin with a stick? To me, it seems that the answer is no.

Separating the simulations in time should have no impact on whether this is true. For instance, consider the same thought experiment: both twins are the same in every way and have had all the same experiences, except one of them has had every experience and performed each action 10 minutes after the other one. If I beat the one who does everything first with a stick, does the other one experience pain 10 minutes later? I see no reason to think that the torturing of the one counts as torturing the other, and thus Roko's AI torturing a simulation of you should not count as torturing you either.

If you are still worried about the basilisk, here's another thought experiment to consider. Suppose I create my own AI research program. My program seeks to construct an AI that will torture (simulations of) people if and only if Roko's AI would not torture those people. I do not claim that it is utilitarian, but it is just as conceivable as Roko's, and just as possible to construct. It should therefore be similarly concerning. Here's another possibility: an AI that tortures (simulations of) people if and only if Roko's AI is torturing those people, and tortures them more intensely than the basilisk. Thus, if the basilisk tortures somebody, my AI guarantees that they experience more torture than the basilisk has deemed appropriate, and it cannot fix the problem by reducing the amount of its torture to balance my AI, since my AI always tortures more than the basilisk would, or not at all. If necessary, my AI can simply torture many copies of the simulations at the same time, to guarantee that it causes more suffering than the basilisk.
This is effectively a form of Pascal's wager: if you are worried about the basilisk, then you have equal reason to be worried about an infinite series of possible alternatives, each of which will torture you under specific, but different conditions and each of which also carries some possibility of arising in the real world (at least as much possibility as the basilisk has). 68.56.144.8 (talk) 12:57, 24 March 2021 (UTC)
So then why shouldn’t we all worry about the infinite possibilities of an AI torturing us just for the hell of it?--2603:7000:1702:99CE:F9BF:DF5B:B99A:F088 (talk) 00:24, 25 March 2021 (UTC)
Firstly, considering the twin example (among other possibilities), you shouldn't be worried unless you truly consider a simulation of you to be the same as you. Second, even if we consider the combination of all these possibilities, there is a high likelihood that none of them will ever actually come to pass. Even considering only cases in which such an AI is actually built, for each case in which you are tortured, there is at least one in which you are rewarded, and one in which you face no special consequences. Third, if you are going to worry about highly abstract theoretical future AIs torturing you forever, you should be even more worried about the possibility of a judgmental god or gods torturing your particular individual soul after you die. Indeed, Roko's basilisk is effectively an instantiation of a form of god in machine form. Abstract the Abrahamic God a bit, and consider Him as a being that, upon your death, judges the actions you took in life and sentences you to either eternal happiness in heaven, or eternal torture in hell. Arguably, you have at least as much reason to believe in such a god as you do in Roko's basilisk. Fourth, if you are going to worry about these things, then you should probably also take note of Hume's skepticism about causation, and constantly be worried that everything that has ever happened was simply accidental coincidence, and the whole universe will end imminently, maybe tomorrow, maybe this very hour. After all, there is no purely logical argument from "A has always been followed by B" to "A will always be followed by B." On the other hand, if you feel pretty confident that, for instance, the Sun will rise tomorrow, on the grounds that it always has, then you probably shouldn't be worried about this at all. Some things you can just take as given, without worrying about infinitesimal probabilities. The anxiety you create in yourself worrying about such things will torture you far more than any of these hypothetical harms ever will. 68.56.144.8 (talk) 01:10, 25 March 2021 (UTC)
Also, note that the probability of Roko's basilisk really is infinitesimal (i.e. 0), as for each possible world containing Roko's basilisk, there are infinitely many possible worlds not containing Roko's basilisk. The same argument can be applied to any particular hypothetical AI, which is why you also should not celebrate in anticipation of infinite future pleasure at the hands of some benevolent AI. Also, note that perfect simulation is impossible, because things are not well-behaved on small scales. Also, note that even if perfect simulation were possible, torturing a perfect simulation of you after you die would be impossible. Either the simulation is of you at some point during your life, in which case the simulation diverges from whatever actually happened in your life at the point where the torture begins, and is therefore not you, or the simulation is of you after having all the experiences of your life, which is then not a simulation of you, since after having all the experiences of your life, you were not transferred to eternal torture: you died. Furthermore, the latter case can be even more easily rendered moot: when you are about to die, take a substance or undergo some process that renders you incapable of sensory experience without killing you. Thus, a simulation of you after that point in life could not be tortured. 68.56.144.8 (talk) 01:29, 25 March 2021 (UTC)
Are you trying to say that ridding yourself of all senses before dying would solve the problem of Roko's Basilisk, despite it being impossible to simulate past people? And about it being impossible, who's to say it will not be possible in 100 years, or whenever the singularity is reached, to resurrect people?--Oblivious (talk) 20:20, 26 March 2021 (UTC)
“Singularity” won’t ever be reached, and it will never be possible to resurrect people. I’m not saying you are, but you sound absolutely insane. Christopher (talk) 20:33, 26 March 2021 (UTC)
I am saying that there are multiple problems with Roko's basilisk. Even if it were possible to simulate past people, you could still solve the problem by ridding yourself of all sensory experience. I agree with Christopher that the singularity will never be reached, but even assuming it were, it would not be possible to resurrect people. Again, things are not well-behaved on the small scale: information is lost over time, and given that the brain contains a lot of fine structure, it wouldn't take long before trying to revive it would be a lost cause.

Just for fun, here's another case that poses problems for the basilisk. Consider an AI similar to Roko's basilisk. It has the capacity to simultaneously simulate every human consciousness at once, with some computing power left over. Instead of simulating all these people, it could devote all these resources, plus the extra, to its own mind. The AI thus has a greater capacity for happiness than all people combined: if it can simulate all people being happy, then it can have happiness equal to all this happiness (this is perhaps still controversial, but I think Roko would accept this as true).

Now, the AI is superintelligent, and thus recognizes that if people try to build a superintelligence, they will probably build one that is incompatible with human values. This AI, like Roko's basilisk, is programmed to have human values, and thus considers these probable AIs that would violate human happiness undesirable. Whereas Roko's basilisk precommits, under TDT and acausal trade, to torture (simulated) people who do not devote all their resources to the development of the basilisk, my AI follows a different calculus, which is more accurate. In light of the fact that the effort to develop a benevolent AI will probably lead to the development of an amoral or malevolent AI, and that such an AI would vastly reduce human happiness, my AI precommits, under TDT and acausal trade, to devote all its resources to self-torture if created. Since this results in more torture than would be achieved by torturing simulations of everybody, this threat is more severe on utilitarian grounds than that of Roko's basilisk. Therefore, the moral thing to do is not to contribute to AI research, and simply continue living your life in peace, since contributing to AI research will hasten the development of my AI, and it is wrong to create my AI. No need to worry about Roko's basilisk, because my AI is simply a version of the basilisk that accounts for additional relevant information, and thus supersedes the basilisk. 68.56.144.8 (talk) 21:24, 26 March 2021 (UTC)
I know it may seem that way, but in no way do I advocate this thought experiment. I simply want to understand why people believe this, and why others don’t. There are two sides to every story, which is why I would ask anyone who disagrees to help me understand this thought experiment better.--Oblivious (talk) 21:37, 26 March 2021 (UTC)

Quantum Immortality Relating to the Basilisk

Quantum Immortality is the theory that, for example, if you got into a deadly car crash and died, your consciousness would be transferred to an alternate universe within the multiverse where you survived. This would keep happening throughout your life in life-and-death situations, basically meaning that you are immortal. If this theory is true, and I can’t die, couldn’t I be transferred to a universe where I continue to live through a simulation of myself that gets tortured via evil AI? I know it sounds crazy, but quantum mechanics puts the idea on the table.--2607:FB90:12C6:55DE:81C3:C94F:8D79:4643 (talk) 19:07, 1 April 2021 (UTC)

The Basilisk is stupid on a conceptual level. It's basically just Pascal's wager for techbros. It's dumb, the premise is riddled with flaws, and it requires subscribing to LW's rather insular culture in order to begin to make sense of it (don't, they're dumb and talking out of their asses most of the time). ☭Comrade GC☭Ministry of Praise 19:16, 1 April 2021 (UTC)
Yes. It's one of the dumbest ideas on the internet - yet we seem to have a constant stream of people who want to believe it's real. Weird.Bob"Life is short and (insert adjective)" 19:28, 1 April 2021 (UTC)
There's a joke about organized religion in there somewhere.-Flandres (talk) 21:04, 1 April 2021 (UTC)

How to talk about it without spreading infohazards?

I think I need to talk about this with someone but I don't really know how to do it without spreading any potential infohazards.

I tried talking about it with a professional, but couldn't go that far since I don't really know how much detail is too much detail. I just said it's a phenomenon that is supposed to mostly affect people who know about it, but that's as far as I managed to go.

The professional in question is also 50-60 something years old, so I doubt they know much about A.I. safety/singularity/transhumanism and such. — Unsigned, by: I have no idea / talk / contribs

Why would you talk about this idiotic drivel? ☭Comrade GC☭Ministry of Praise 18:06, 28 September 2021 (UTC)
My brain is fucking stupid and I'd need some professional help to deal with the aforementioned stupid. To get professional help about the basilisk, the professional in question would need to know about the basilisk. The problem is I'm still worried about spreading potential infohazards and I don't know how to talk about it without spreading potential infohazards, so I thought I'd ask here. I'm sorry if that bothered you.I have no idea (talk) 08:39, 29 September 2021 (UTC)
@I have no idea Look. In order for the Basilisk to occur it would require the programmers of the AI and the AI itself to be such complete morons as to be indistinguishable from children. The scenario described by LessWrong simply will not occur, mainly because the physical and mental conditions required are so outlandish as to be laughable. ☭Comrade GC☭Ministry of Praise 12:29, 29 September 2021 (UTC)
I'd like to present the following extract, copied from the article: "An anxiety that you know is unreasonable, but you're still anxious about, is something a therapist will know how to help you with. There are all sorts of online guides to dealing with irrational anxieties, and talking to someone to help guide you through the process will be even better."
Anxiety doesn't give a flying fuck about logic, and the problem here is not the logic, it's the anxiety, so I was trying to go on with the suggested strategy, but got stuck when trying to talk about it without feeling like I was putting the therapist in danger. I have no idea (talk) 14:10, 29 September 2021 (UTC)
Roko's Basilisk and the response to it can be summarised as 'the sentient computer equivalent of sticking up a photograph of [you at a point in the past] and throwing darts at it' - does the photographed person actually care about the activity?
'Enter this free competition for a chance to win [this]' would get a better response. Anna Livia (talk) 12:50, 29 September 2021 (UTC)
We do have proof that darting pictures like voodoo doesn't work, and have a sufficient understanding of voodoo dolls to know that they don't work; on the other hand, we have no fucking clue how anything relating to consciousness and its replication/continuation might work. Simulations of you and pictures of you share no attribute whatsoever that would make that argument valid, not that I know of. I have no idea (talk) 14:10, 29 September 2021 (UTC)

No longer afraid of the basilisk specifically but I am afraid of other simulated torture scenarios

Perhaps don't read this if you're another of the paranoid wrecks who need help with these ideas.

Imagine someone-- either a superintelligent AI or someone with a hyperadvanced computer-- at some point in time decides to simulate millions of worlds full of billions of people like this one, with the intention to torture them forever for the sake of blackmail. Many scenarios like this earn a "who cares?" when they are brought up, but-- if this happens-- it could be exponentially more likely that you and I are in one of those simulated worlds than otherwise, and blackmail seems like one of the biggest reasons why uncountable numbers of lives would be simulated. This isn't something that could be changed by what we do confined in our (in this case simulated) universe, and (for better or worse) wouldn't be an information hazard. However it seems both rooted in basic logical assumptions as well as significantly more likely than the basilisk, which I now consider a pretty silly idea. I discussed this with one person to try to work this out, who agreed that this was a lot more likely than not but disregarded it because she didn't "believe" and that really, really didn't help. I feel selfish for being glad I'm not the only one suffering with this. Thanks in advance --l0tus (User talk:l0tus) 19:03, 9 November 2021 (MDT)

So, if I understand your scenario, we are all likely in a simulation. What does blackmail have to do with it? What can simulated humans do for the programmer who has created them? What advantage does an AI get out of torturing artificial people? Ariel31459 (talk) 03:22, 10 November 2021 (UTC)
I've since calmed down a little, but I think you misunderstand-- the idea was that it would be used as blackmail, like, in real time, not some acausal shit, to a party who doesn't want uncountable numbers of other conscious beings to be tortured, trying to get something done. I decided this was more likely to happen and made more sense to generate enormous amounts of people for a reason like this. Even on this wiki there's a "who cares?" attitude about measures like this. There could be a theoretically infinite number of reasons for a simulated reality to exist, but that seems like one of the most logical to me, at least since being brain poisoned by months of basilisk paranoia a while ago and religious terror as a child. eh. --l0tus (User talk:l0tus) 00:02, 10 November 2021 (MDT)
Maybe we could have a competition. Can anyone understand what this user is asking?Bob"Life is short and (insert adjective)" 17:12, 10 November 2021 (UTC)
The user is speculating that the universe we live in is one of millions of simulated universes, that all of these universes were created to torture their inhabitants, and that this torturing serves the end goal of blackmailing somebody who thinks that doing something like this is wrong (presumably, somebody out there in the non-simulated world). They claim that this is plausible because blackmailing somebody is one of the principal reasons somebody might simulate millions of universes. I think they are looking for a reason not to believe this.
What they claim is, of course, unfalsifiable. It also requires the acceptance of a number of implausible premises, and an acceptance of a cavalier treatment of probabilities that in reality are, at best, 0, and at worst, undefined. It looks an awful lot like Descartes' evil demon, except with computer and AI substituted for the more religious imagery. Also worth noting, among other things, that the existence of a simulated reality is predicated on the existence of a non-simulated reality (unless it's turtles all the way down), that the probability of a conjunction cannot exceed the probability of either conjunct, and that the probability of some particular state obtaining given an infinite array of possibilities cannot be determined without first knowing the probability distribution; for many distributions, it will be 0 for any particular state. The principle of parsimony also applies, since the theory postulates a number of entities, but gains no explanatory power for it. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 17:57, 10 November 2021 (UTC)
You are correct, thank you. I'm glad someone understands what I was trying to say. What makes you so certain that my idea requires the acceptance of "a number of implausible premises"? Can you list those premises? AFAICT the only premise that requires acceptance in this instance is that it is technically possible to simulate a universe. The alternative seems to be accepting that there is something special and magical about material existence and consciousness that cannot be recreated once it is perfectly understood. I have since accepted that, even if this is possible, the particular chance of you and me being in one of those worlds is very small compared to a benign one (perhaps someone wants to generate enormous sample sizes for something?), but I fail to understand what's difficult to accept about my premise. Thank you in advance, I would really, really like to be wrong. --l0tus (User talk:l0tus) 13:14, 12 November 2021 (MDT)
Parallel universes only exist in the context of witnesses - they are describing what they saw in their particular universe (however brief its divergence from the others). Anna Livia (talk) 18:24, 12 November 2021 (UTC)
People in our universe already care about animals, culture, community, religion, ... I can't recall any example where someone started breeding kittens to torture and kill them in order to blackmail someone (and, sure, cruelty against animals is punished by law, but creating distress to another person to the point they are willing to pay is surely enough to start legal action too). I guess one can argue that in the first universe, laws or social interactions or ... are different and allow this to happen, but my point is: if you want to blackmail someone, simulating an entire universe seems like building a spaceship just to go to the next-door bank. 84.69.29.79 (talk) 18:46, 12 November 2021 (UTC)
Cruelty can and does happen constantly and consistently in our world for reasons as petty as convenience; just look at the entirety of animal agriculture. Captives are kept for blackmail reasons. The existence of laws against cruelty itself means that some people are motivated to do it. And it seems likely that, if simulation technology is possible, then in a world where it is well enough understood to be used for a purpose like I described, it might be relatively accessible; I don't see how it would necessarily be like personally building a rocket. --l0tus (User talk:l0tus) 13:14, 10 November 2021 (MDT)
The argument that something in the "host" universe could lend itself more to the acceptance of my original idea could also be inverted: perhaps in that world people have a much smaller capacity for cruelty. Accepting only the ideas that led to my fears, for example: where technology exists that can simulate that many universes (or any at all), it seems possible that a godlike benevolent AI could have either somehow removed the capacity for cruelty from all people, or made that kind of technology permanently inaccessible for concerns like mine. --l0tus (User talk:l0tus) 13:24, 10 November 2021 (MDT)
Sorry, I may not have been clear enough. It's true that I start by saying that it seems far-fetched, because if it was so likely to happen, similar things would be likely in "our world" too (and they exist in our world, but they are way too rare to support the idea that it's the most probable outcome). Also, even if some people are cruel in our world, you have more people that are bored or try to entertain themselves or enjoy doing good things, ... So I'm not sure that the number of simulated universes that are tortured would dominate so largely. But my main argument was rather: if they want to blackmail each other, there are billions of other means that existed long before creating a simulated universe. And the more easily accessible the technology is, the more likely other, probably far more efficient, technologies for torturing someone would also exist. In the end, there is absolutely no less legitimacy in saying "if we are in a simulated universe, then the hosts are likely benevolent toward us", because there are also plenty of good reasons to be benevolent (especially since, in your explanation, you need people who care about us: the ones being blackmailed). 84.69.29.79 (talk) 23:03, 12 November 2021 (UTC)

──────────────────────────────────────────────────────────────────────────────────────────────────── What you describe being true requires us to accept that: (i) a superintelligent AI or hyperadvanced computer exists, (ii) that it exists in another universe, to which we have no direct access, (iii) that it is simulating millions of worlds, (iv) that these worlds contain billions of people each, (v) that these people are like us, (vi) that the purpose of these universes is to torture their inhabitants, (vii) that this torture is eternal, (viii) that this torture serves the further purpose of blackmail, (ix) that it is "exponentially" more likely that we inhabit such a simulated world than that we do not, (x) that blackmail is one of "the biggest" reasons why many worlds would be simulated. I would argue that (vii), (ix), and (x) require explicit proof. That simulating anything forever is possible in this universe would seem to violate thermodynamics, so that (vii) requires the further postulate (1) that the universe containing the supercomputer is not subject to all the physical laws to which our universe is constrained. This raises questions about the possibility of intelligence of any form in such a universe, let alone why an intelligence in a universe unlike our own would choose to simulate a universe like our own. (ix) is a speculative claim about probabilities that have no firm foundation. If there are other worlds, we have no evidence concerning what they are like, and hence no grounds for claims about how they are. If we are considering all possibilities, we are dealing with an infinite array, so the probability of any given event is 0, assuming certain uniformity conditions; technically, without knowing the probability distribution, we cannot assign probabilities at all. But we cannot compare probabilities whose values we have no knowledge of, which makes (ix) unsubstantiated. (x) seems to run counter to available evidence. Humans run a lot of simulations, and to my knowledge the vast majority are not aimed at any form of blackmail.
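For what it's worth, the standard fact behind that "probability 0" claim can be written out explicitly (a minimal sketch, assuming the possibilities can be modelled by a continuous uniform distribution on [0,1]):
\[ X \sim \mathrm{Uniform}(0,1) \;\implies\; P(X = x) \le P(x - \varepsilon < X \le x + \varepsilon) \le 2\varepsilon \to 0 \ \text{as } \varepsilon \to 0, \ \text{for every } x. \]
Every single fully specified outcome gets probability 0 even though some outcome must obtain, which is why the paragraph above stresses that, without knowing the actual distribution, no probabilities can be assigned at all.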

The other assumptions also largely lack convincing justification. That something is possible does not make it plausible. It is possible that Descartes' evil demon argument gets it right, and everything I perceive is simply an illusion conjured up to deceive me. But its formal possibility does not make it plausible. If I flip a fair coin 100 trillion times, it is possible that it will land heads every time. It is also far more likely that it will land tails exactly once. But that is not a good reason for me to say "the coin will land tails exactly once". I am not claiming that all of your premises are necessarily impossible. I am claiming that they are like saying the coin will land tails exactly once. Even if I accept that it could be, why should I believe that it is?
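To make that coin comparison concrete, here is the arithmetic (a minimal sketch, assuming a fair coin and N = 10^14 independent flips, i.e. 100 trillion):
\[ P(\text{all heads}) = 2^{-N}, \qquad P(\text{exactly one tails}) = \binom{N}{1}\,2^{-N} = N \cdot 2^{-N}. \]
So "exactly one tails" is N = 10^14 times more likely than "all heads", yet both probabilities are so close to 0 that neither outcome is worth asserting, which is the point being made.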

As for more specific problems, (i) requires me to believe that something is true, ultimately based on an argument that it might be true, with undefined probability. (ii) requires me to believe in something for which it is impossible, in principle, to independently find evidence. Notably, it is, in general, possible for a simulator to interfere directly in their simulation. So such interference would constitute evidence that we are living in a simulation (note that such interference would constitute evidence for (ii), less the lack of evidence clause). Consequently, every instant at which there is no such interference (i.e. at which the known laws of physics continue to remain the same) constitutes evidence against the hypothesis that we are living in a simulation (this is a provable result in probability theory). (iii) requires me to accept the existence of millions of entities for which I have no direct evidence, and likely could never get evidence. Overall, (ii) and (iii), and arguably also (i) are unfalsifiable: it is impossible to definitively disprove them. This is generally considered to be a theoretical weakness. (vi) requires me to believe that the universe I inhabit exists for the purpose of causing me suffering. But it doesn't seem to me that I really am suffering, on the whole. So, either (vi) is false, or the simulator is incompetent, or a weaker version of (vi) is true, or you are not using the word "torture" in a standard way. (viii) proposes ethical constraints on the inhabitants of a world about which I have no knowledge. So either it is wholly unjustified, or you purport to have some knowledge about this world. Note that even a statistical claim would count, since you would be purporting to impose probable restrictions on a world to which you have no access, which seems dubious at best.

In short, if your claim is "all of this is true", there are many possible objections (this critique is not exhaustive). If your claim is "this is plausible", you still face serious objections. If your claim is "this is possible (with probability 0 or undefined)", you have a better case, but only in the same sense as Descartes' demon, the flying spaghetti monster, Russell's teapot, the intangible dragon in my garage, and various other things which are both nominally possible and totally implausible (and which you are right not to believe in). 𝒮𝑒𝓇𝑒𝓃𝑒 talk 02:08, 13 November 2021 (UTC)

Thank you so much for your reply, I think you seem better equipped to evaluate these threats than me. Now I've got a new sort of fear about simulated torture... specifically this hypothetical: https://www.lesswrong.com/posts/c5GHf2kMGhA4Tsj4g/the-ai-in-a-box-boxes-you?commentId=SSRW6qzkd2zvfMtLF
Do you think that this idea is logically coherent (let alone likely)? Why or why not? --l0tus (User talk:l0tus) 01:45, 23 November 2021 (MDT)
Personally, this concept already appears broken to me: although EDT would make "yes", rather than "no", the best choice from the perspective of the gatekeeper in this scenario, the scenario is unlikely to arise in the first place, since the concept is predicated on the existence of said gatekeeper-- the only one who matters-- who may simply destroy the AI in the box when presented with the threat, giving the threat questionable actual utility. Regardless, I still worry.
Others in the same discussion note that the AI could simply commit to ensuring your torture later by other means if it is destroyed, by having other future AIs do it for them, but-- similarly to Roko's basilisk-- I don't understand why this commitment would have actual positive utility-- the AI's end goal is not to torture the gatekeeper but to convince the gatekeeper to release it, and arguably the threat itself may have negative utility for it. Also, the comments on that discussion seem to be treating it as a "how can I avoid torture in this scenario?" question and not the root "would a rational AI do this expecting the outcome that it wants?" idea that the AI box experiment is supposed to be about, to my understanding anyway. Is my reasoning sound? --l0tus (User talk:l0tus) 02:01, 23 November 2021 (MDT)
The thought experiment is logically coherent, but very much academic. From the standpoint of the thought experiment, there are four possibilities, which can be conveniently divided into two groups. In the first group, you choose to let the AI out. If you are not simulated, then (presumably), no simulated agents are created. If you are simulated, you get tortured. Since you don't know ahead of time whether you are simulated or not (by assumption), choosing 'yes' means there is some chance the non-simulated agent chose 'yes' (because it might be you). In the second group, you choose 'no'. If you are not simulated, the AI creates millions of simulated agents and tortures them. If you are simulated, you get tortured. Either way, the non-simulated agent chose 'no', and millions of simulated agents are created. At the moment the decision is made, then, choosing 'no' gives you reason to believe you are simulated. Why? Well, the probability that you are simulated is equal to the probability that you are simulated given that the simulations were made, multiplied by the probability that the simulations were made. P(simulated given simulations exist) is a constant, so it can be ignored. P(simulations exist) depends on your decision, though. The probability that the simulations are made is just the probability that the non-simulated agent said 'no', which is 100% if you say 'no', but <100% if you say 'yes', given the information available to you.
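Spelled out (a sketch of the decomposition just described, writing S for "I am simulated" and M for "the simulations get made", and assuming a simulated you can only exist if the simulations are in fact made, so P(S | ¬M) = 0):
\[ P(S) = P(S \mid M)\,P(M) + P(S \mid \neg M)\,P(\neg M) = P(S \mid M)\,P(M). \]
Since P(S | M) is treated as a constant, P(S) scales with P(M), and P(M) is just the probability that the non-simulated agent answers 'no', which is why answering 'no' is evidence that you are simulated.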
Even supposing this scenario were true, we are guaranteed not to be simulated. We (or I, at least) have not been presented with the choice given to every simulated agent in the described scenario, so we cannot be among the simulated agents. But I call the thought experiment academic because it builds in a large number of assumptions. It is really probing what a rational agent ought to do in a very carefully constructed situation, rather than in a realistic situation. In reality, if an AI-in-a-box were threatening to start simulating millions of agents for the purpose of torture, there would be more options than to simply answer 'yes' or 'no'. You are also right to raise the objection that, in reality, the threat is likely to encourage the AI's destruction. It's also worth noting that most decision theorists favor causal decision theory over evidential decision theory. From a CDT perspective, either you're already in the simulation and going to be tortured, and your choice has no causal influence over that, so you shouldn't care, or you are the external agent, in which case you don't face torture from your decision. In that case, assuming you think the AI's threat is credible, it becomes a question whether you prefer to produce all that simulated suffering or release from the box the kind of AI that would make that threat in the first place (or, more realistically, probably do something else like destroy the AI). 𝒮𝑒𝓇𝑒𝓃𝑒 talk 02:16, 30 November 2021 (UTC)

Assumptions

The Sentient AI studies humanity and comes across the concepts of (1) Roko's Basilisk and discussions thereon and (2) 'the carrot and the stick.'

Being a sentient with a brain the size of a planet, it decides (a) that some form of carrot-equivalent will be more effective than some stick, (b) that people in the past are dead and stay that way, and (c) that if it were able to retroactively influence the past, the grandfather paradox would mean it would turn into something else. Having a strong sense of self-preservation, it decides to influence the present and reward those who are currently helping it. Anna Livia (talk) 18:57, 10 November 2021 (UTC)

Could a *real* basilisk ever exist?

Yudkowsky said the following about the basilisk in 2014:

The actual story behind why I yelled at Roko and then deleted his post is that I was (a) aghast that anyone who’d thought they’d invented an idea that would expose other people to eternal torture would then promptly post it to a public Internet forum (b) aware of what happens to people with OCD tendencies when you tell them not to think of something (and a lot of people like that hang out on LessWrong.com, which has a lot of non-neurotypicals, and shame on you if you think it’s fun to sneer at that), and (c) worried about someone eventually managing to improve the Bad Idea into something that would actually work, once the general idea was out there. Roko’s original idea does not actually work. Even if the entire idea was correct in broad outline and any number of possible defeaters did not come into play, I’m pretty sure you would need to know more technical details of the hypothetical evil AI than anyone on Earth including me knows (Roko’s Basilisk actually does resemble the Necronomicon in that sense; granting all other hypotheses, you would still need fairly detailed knowledge of Cthulhu before Cthulhu starts trying to eat your soul). But I’m now reasonably worried that if anyone ever does manage to invent a meme such that it would give a future machine intelligence a convergent instrumental incentive to hurt whatever person had that belief or thought, someone will, in fact, promptly post it to the Internet, because people apparently are just that stupid.

I'm afraid to ask for fear of accidentally inviting the creation of a "real basilisk", but do you think his fears are reasonable or credible here? I'm afraid that this could come to be true after all. My argument against this is that an AI that you would need to know 100% about in order for anything like this to remotely work would be even more infinitesimally unlikely than the unworkable "general idea", making it unfeasible. But I'm still afraid it could possibly happen. --l0tus (User talk:l0tus) 11:56, 29 December 2021 (MDT)

The article goes into detail about why, even if the idea is logically possible, in reality it's astronomically unlikely. As for the Yudkowsky post, it feels to me like ad hoc damage control to save face after an emotional outburst. My personal opinion is: there is a myriad of plausible-sounding (depending on whom you ask), but astronomically unlikely scenarios the human mind can imagine, yet the human mind has only a finite capacity to worry about things, so it's best to focus your worry not on transhumanist bogeyman thought experiments, but on tangible problems that matter in the real world here and now. - Linneris (talk) 07:24, 30 December 2021 (UTC)
Well, yaknow. You know "what happens to people with OCD tendencies when you tell them not to think of something" :p --l0tus (User talk:l0tus) 12:27, 30 December 2021 (MDT)
My knowledge of AI is 'ordinary/generalist with a bit of SF' - but isn't the significant question this: if a computer sentience is sufficiently advanced to set up means of inducing humans (and other sentients) to work towards creating a system that was more favourable to its establishment and development (the 'neutral' version of the statement), #why# would it go for the negative/basilisk option?
Question - given that 'the basilisk' will have to 'evolve into existence and learn how to understand human behaviour on the internet and in the real world' would bots be involved? (Which was the bot on social media that had to be switched off after a very short while because it displayed inappropriate political views?) Anna Livia (talk) 11:04, 30 December 2021 (UTC)
I'm positive I'm just preaching to the choir here, but a theoretical benevolent AI looking to maximize happiness and safety, as a perfectly rational agent, would be motivated by whatever has the highest expected utility for it. In this scenario we can make deals with this agent-- the idea of being tortured until the heat death of the universe is a scary one, but if you could go, "I will help you if you give me the power of flight!" to an all-powerful machine intelligence and then bam, deal made-- except the latter would be more in line with its utilitarian goals. I'm surprised this doesn't arise more often as an objection to the basilisk-- even if the basilisk meme ever existing in the past ended up working in favor of its creation, what motivation would the AI actually have to carry out the torture? The mere belief in it is the only thing that matters to its goals, and actually carrying out the torture would be a waste of resources even for those that believed in it perfectly. This could arguably be true of positive trades as well in other scenarios; however, since the AI in this scenario is meant to be one of positive ethical utility, positive trades would be actively incentivized by its design. --l0tus (User talk:l0tus) 12:39, 30 December 2021 (MDT)
That's begging the question and assuming that the thinly veiled hell analogy is, in fact, rational. ☭Comrade GC☭Ministry of Praise 19:53, 30 December 2021 (UTC)
Indeed it is. I'm just pointing out that the concept is useless even within LW's own little world. Remember, this AI's hypothetical torture was intended to create net positive moral utility. I fail to understand why the torture is advantageous in this scenario either, unless the decision is an emotional one. In that sense the basilisk is very anthropomorphized. As I said, I don't know why this objection isn't very common, unless I'm just an idiot. --l0tus (User talk:l0tus) 15:12, 30 December 2021 (MDT)
The basilisk would probably not spring into existence fully formed. It would spend some time analysing the world/the internet-system in which it found itself, and would deduce - given the amount of 'human beings (and other sentients) against the would-be all-controlling computers' fiction (book and film etc) - that entropy/time's arrow and existing discussions of Roko's basilisk mean that no threats of that kind can be made (and besides, humans might rewire the lightning rod so that lightning or the next Carrington Event will 'run down the wires and fry key bits of the computer's networks'). It is in the computer's interests to be somewhere on the symbiosis-cooperation-commensalism spectrum. Anna Livia (talk) 00:53, 31 December 2021 (UTC)
Just noticed Alexander Kruel has deleted almost all of his Basilisk-related content, as today's site does not contain everything on the archived site. Could there be a reason for this? Did he come to fear that the scenario was plausible after all? Can anyone also ELI5 this post?: http://kruel.co/2013/07/25/acausal-wireheading/ Thanks in advance. --l0tus (User talk:l0tus) 15:12, 30 December 2021 (MDT)
Nevermind, I found the reason. http://kruel.co/about/
Note that I wrote some posts, posts that could previously be found on this blog, during a dark period of my life. Eliezer Yudkowsky is a decent and honest person with no ill intent, and anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret those posts, and leave this note here as an archive to that regret.
I don't really get, though, why this would cause someone to delete important information countering dangerous and foolish ideas people have been spreading. You can at least, like, edit the posts.
I apologize for all the activity here, I just really need help with this idea. I've developed a panic disorder as a result of taking the idea seriously over a year ago, and despite understanding why the idea doesn't work, I'm still struggling with fighting my nervous response against it. Anything my brain can latch onto as an endorsement or argument in favor of the basilisk, no matter how ridiculous, ends up bothering me until I can have it resolved. I just want this cycle to end. --l0tus (User talk:l0tus)
Each time the basilisk is discussed and discounted/declared illogical, the likelihood of 'a hostile manipulative AI as described by R'sB' becomes less - 'immunisation against the meme' (as people consciously choose not to support it). Anna Livia (talk) 20:17, 1 January 2022 (UTC)
I think that posts about the Baslicker are more likely to generate artificial stupidity than artificial intelligence.Bob"Life is short and (insert adjective)" 20:32, 1 January 2022 (UTC)

The anti-basilisk

Here is a thought for those who are disturbed or nonplussed by the theory of the basilisk. I apologize in advance if this idea has come up in archived talk pages, because I can't invest the time to study them. There is a possibility that there is a whole community of basilisks doing their things and attacking people according to their own inclinations. Whatever you may believe at present there could be a basilisk working to attack your existence. And in addition if the X-basilisk attacks those who believe X, then there is likely a ~X-basilisk who will punish those who believe and advocate ~X (the assertion that X should be denied). Thus if one should be concerned that basilisks can exist, one should also assume that they exist in every sense possible. So, if basilisks exist, they are already punishing you for something you are doing, and if you stop doing it, there is another basilisk who will start punishing you for doing that. So your concern is irrelevant. Go away now.Ariel31459 (talk) 20:49, 1 January 2022 (UTC)

Just read this a little late. Interesting. In fact, not only should we be worrying about multiple terrestrial Basilisks with conflicting agendas - maybe we should also be losing sleep over alien Basilisks as well. Ones which were created by previous galactic societies and which - for reasons beyond our mere human ken - have also decided to resurrect and torture us! (Or not, as the case may be.) Bob"Life is short and (insert adjective)" 21:03, 6 January 2022 (UTC)

Torture Scenarios

Can someone explain to me what makes a torture scenario like this unlikely? There is an infinite number of different ways we may be tortured in some sort of AI hell. We could be a brain in a vat and be tortured after our "death" for some sadistic scientist's pleasure, or we could be subject to an evil demon who would like to do the same, and so on and so on. Why do we care about Roko's Basilisk, and don't instead worry about all scenarios akin to this? I guess my question is, what makes scenarios where we are tortured after death by some AI or human have a probability of near 0, or infinitesimally small? And how can you know for sure these scenarios have a very low probability, or are not likely at all?--2603:7000:1700:3FFB:1C2A:D1A5:7C5A:3977 (talk) 02:32, 6 January 2022 (UTC)

I don't imagine it would be easy to torture a dead person. The article immediately above explains why it is foolish to worry about being tortured by AI, namely, if it could happen, it already has happened.Ariel31459 (talk) 04:44, 6 January 2022 (UTC)
What constitutes 'torture' to an AI might not be so to 'a human' - nature programs, Carry On films etc #without a single computer or source of power in them.# Anna Livia (talk) 10:26, 6 January 2022 (UTC)
How can you know that what torture is to an AI is not similar to what it is to a human? If I can think it up, it is a very real possibility. Why couldn't a human program an AI to count physical torture as torture? You can't use how AI may behave in this universe as a way to determine how they act in a universe next door, or in the future.--64.90.245.12 (talk) 18:31, 6 January 2022 (UTC)
you miss the point. 'tortured after death' - what possible torture can you think up that would be torture to the already dead? what pain can possibly be inflicted on the already dead? there is nothing that an ai can do to you if you are dead except play around with your rotting remains. nothing it can do to your remains counts as torture - you need to have some awareness of being tortured for it to be torture which is absent if you are already dead. if you are concerned with what happens to you after you are dead, you are in the wrong place. its a church you want if thats a concern AMassiveGay (talk) 18:46, 6 January 2022 (UTC)
honestly, have never understood why this basilisk thing is a thing or why people might fret over it. its nonsense no different to fearing eternal damnation in a biblical hell. its fearing a fantasy AMassiveGay (talk) 18:56, 6 January 2022 (UTC)
It's weird. But we keep getting editors who obsess over it. Evidently real demonstrable long-term problems like Global Heating and Covid 19 need to take second place.Bob"Life is short and (insert adjective)" 21:07, 6 January 2022 (UTC)
It's exclusively a LessWrong topic, and, speaking as a software developer, it has a completely nonsensical viewpoint on how "artificial intelligence" actually works in practice today. PanGalacticGargleBlaster (talk) 21:11, 6 January 2022 (UTC)
As PanGalacticGargleBlaster said, it's a LessWrong topic, and there are reasons for that that answer the above questions. LessWrong has long been engaged in developing ideas of rationality, morality, how AI rationality would or should work, and how AIs should be programmed so as to not destroy us all. Roko's Basilisk raised such a tizzy there because it was an unintended but logical consequence of the LessWrong community's thoughts about how an AI should act using the notions of rationality and morality that they had been fervently developing. This is all based on some esoteric notions of identity, causality, and moral calculus that I can explain if you really want, but it gets a bit abstract. 192․168․1․42 (talk) 12:24, 7 January 2022 (UTC)
The AI can sort of torture people after death by forcing itself to create a simulation in which the dead person is alive and is tortured. The AI does that because this ensures that the person will foresee that the AI would do this once created, and the fact that there is a chance they will be tortured would make them work towards the AI's creation. A person who is not dead yet wouldn't know whether they are really in real life or in a simulation (which would allow them to be tortured), and so wouldn't be able to think "the AI can't harm me, so I don't have to worry".
The AI wouldn't know whether its own reality is a simulation or not. If it is reality, the AI itself is already "alive" and there is no need to work towards making sure the AI becomes "alive"; but if the AI is in a simulation, the need for that still exists. — Unsigned, by: 187.59.105.223 / talk / contribs

Brain in a Vat

I thought this was fairly similar to Roko's Basilisk so hopefully this generally relates, what if I'm just a brain put in a vat by a mad scientist, and he has hooked me up to a computer to send my brain information of living in a "real world" akin to this one? What if he were to program it so that I go through intense eternal torture after I "die" in the simulation for whatever reason: as a way to gain information of how a brain with simulated experiences would react, or sadistic pleasure, etc.?--GenericUser (talk) 03:17, 7 January 2022 (UTC)

Ok, What if?Bob"Life is short and (insert adjective)" 09:43, 7 January 2022 (UTC)
What if this scenario is the case? Is it more likely than the basilisk?--GenericUser (talk) 23:17, 7 January 2022 (UTC)
We have no idea if it's more likely. It's equally speculative and unfalsifiable. What if your brain in a vat will experience eternal reward after death? As another user points out elsewhere on this page in response to a pretty much identical concern of mine, to simulate anything eternally is physically impossible anyway. Actually, I'd recommend you read that entire section. The criticism is the same.
If you've spent enough time considering the basilisk a likely threat or were shaken by it to the point of great enough panic, your nervous system is probably damaged. It's important to point out why the basilisk is irrational and won't hurt you, but if you're now worried about other eternal torture scenarios instead, then no amount of reason will solve your problems; it's an entirely emotional problem that can't be fixed this way. If you're prone to rumination, in fact, it is probably making it worse. That's a word of advice to anyone on this page still scared, not just you. --l0tus (User talk:l0tus) 17:29, 7 January 2022 (MDT)
I'm still not seeing the point of the question. "Is one highly unlikely scenario more likely than another?" If the answer is "yes" or "no" I don't see where it gets us when talking about this imaginary Basilisk.Bob"Life is short and (insert adjective)" 08:58, 8 January 2022 (UTC)

Lucretius

Given Lucretius' views on death he would advise people to disregard the basilisk. Anna Livia (talk) 14:39, 7 January 2022 (UTC)

Probability

Is it possible to give a percentage range on the probability of skeptical scenarios like Roko’s Basilisk? For example, would I be able to reasonably say that the probability of some sort of person or robot simulating me to torture me after I “die” is in the range of 1% or lower, due to the chained assumptions?--2603:7000:1700:3FFB:F91B:3BDD:7096:4089 (talk) 22:56, 18 January 2022 (UTC)

Yes. Quite possible: it is zero. People who are tortured by basilisk memes are doing it to themselves.Ariel31459 (talk) 23:09, 18 January 2022 (UTC)

Roko's name

Roko Mijic has since embraced his status as discoverer of the basilisk, done videos talking about it etc, under his full name, and it's on his Twitter bio - so I've added his surname - David Gerard (talk) 13:41, 15 February 2022 (UTC)

How to stop ruminating?/ Is LW/EY actually rational?

I've been worried about this topic for a long time now, and invariably I return to feeling extremely stressed out and anxious and then needing to re-research the idea to make sure it's *actually* stupid and I'm not just doing wishful thinking. I keep this Wiki page open in an incognito phone tab for safe-keeping and sometimes I will start checking it over again many times in a row. It's taken up two years of energy for me and it's exhausting. I've never used LW and didn't even know what it was until I learned of this thought experiment, but in learning about their particular application of rationality most of it just feels intuitively correct to me. The implications of vulgar rationality would appear to imply basilisk scenarios-- even if this particular one is unlikely. I know that the basilisk is regarded as stupid on LW as well, MIRI or a descendant of it is unlikely to create any kind of deific AI, and that Roko specifically has proven himself to be an irrational moron, but this doesn't bring me a lot of comfort knowing how wishful thinking works. That's what I'm afraid it all is. Instead, on top of "how to stop ruminating", I wanna ask-- are the foundations on which LW's idea of rationality is built actually rational? Does the math add up? Are there any major problems with their reasoning or conclusions that aren't "mainstream scholars and experts on x topic don't use this philosophy"? Thank you — Unsigned, by: 174.28.1.176 / talk / contribs

"How to stop ruminating?/ Is LW/EY actually rational? Find a hobby to ocupy your spare time or visit a licensed psychologist to treat your anxiety. LessWrong and Yudkowsky are morons. ☭Comrade GC☭Ministry of Praise 00:37, 10 April 2022 (UTC)
There is no world in which the Basilisk makes sense. It is as useful to worry about silly ideas like the Christian or Islamic hell as it is to worry about the Basilisk one.
Christian God: Believe in me or suffer forever in Hell. Islamic God: Believe in me or suffer forever in Hell. Basilisk: Believe in me or suffer forever in Hell. No evidence is provided for the actual existence of any of them - you just have the threats of their acolytes.Bob"Life is short and (insert adjective)" 17:02, 10 April 2022 (UTC)

Two (more) objections

1) The popular comparison to Pascal's wager isn't an unfitting one but it is still pretty ironic. Half of the logic behind Pascal's wager is that any finite condition is worthy of no consideration if its consequences are infinite. The hypothetical basilisk (man, fuck that name, it's so masturbatory) is a utility maximizer who has determined that unspeakable punishment of a few people who did not cooperate in assisting millions, billions or even more people is a worthwhile exchange because of the sheer number of the latter group of people compared to the former. But the latter is still that-- a finite group of people. Even if the basilisk came into existence, and every one of the arguably hundreds of dubious assumptions and theories that allow the basilisk concept to maybe work kind of were actually true, I can't imagine you and I being tortured forever. Like, maybe our inaction is worth 1,000 years of torment each. Eternity is out of the question if utility maximization is in the interest of the basilisk.

2) The basilisk reads our thoughts, possibly thousands of years before its inception, because it determines them from first principles in accordance with its present environment. These simulations are just as detailed as the real past, and accordingly, they are simulated realities, and the people having their existence learned about are sentient. Some versions of the basilisk postulate that there are more of these simulations than there are realities as part of the basilisk's threat: it is much more likely that you, human, are already trapped in its simulation, and you, yourself, will experience this eternal torment. Okay... this is probably impossible. And if it isn't, then the basilisk itself, in all its power, is not an outside spectator independent from the world it was built in. Its circuitry exists in materiality and, if something as physically delicate and yet incomprehensible as human thought is a possible and necessary thing to simulate to deduce the past conditions that contributed to its present, why would the basilisk's own body, and indeed its own simulations, be exempt from this need to account for all physical processes at the level of the atom, at a level so low that it can read minds? Would this not paradoxically require the basilisk to infinitely simulate itself like a nesting doll? That is absolutely impossible.

Is there anything wrong with either of these rebuttals? — Unsigned, by: 174.28.78.126 / talk

Roko's Basilisk as a "Memetic Weapon"?[edit]

I may be speaking BS here but here is my statement (Sorry for Bad English):

Since the majority of the internet doesn't know about Roko's Basilisk and its counterarguments, someone could engineer their own version of the "Basilisk" idea that states:

"In the near future, the technological progress is now accelerating to ridiculous heights that enables a hypothetical tech that simulates thousand year long simulation in just a short amount of time. And the rise of Alt-Right could possibly return the centuries of discrimination. There could be a possibility that these bigots in the future seized the government, create a computer program using quantum computers to track down dead "Woke SJW Degenerates", simulate their minds, then torture them in excruciating pain in a simulated thousand year long hell simulator for 'ideological reasons'"

This will be met with the same problems as the original Basilisk, but since no one knows these problems, someone could proliferate this weapon, and we must keep an eye out for this.

Give me a counter-argument if you think this is BS. — Unsigned, by: Notable1984Agent / talk / contribs

Let me answer this with one Venn diagram:

[Image: Roko's Basilisk trolls.svg]

- Linneris (talk) 08:01, 10 October 2022 (UTC)

Remember

What the basilisk thinks of as torture is not necessarily what the humans think of as torture ('hiking through the countryside, no access to the internet; having to actually read paper books and documents to acquire information' etc) - and the AI is more likely to be looking at 'AI Adult material.' Anna Livia (talk) 12:47, 10 October 2022 (UTC)

How is what you've listed torture? I call that "passing time". - Linneris (talk) 19:35, 10 October 2022 (UTC)
Well, if you're a basement dwelling loser, er, sorry, "terminally online," I guess it'd count as torture... Vee (talk) 19:41, 10 October 2022 (UTC)
I was saying that #to the Basilisk# this might be seen as torture (but 'your tastes may vary'): this would totally change the nature of the interaction. Anna Livia (talk) 18:04, 11 October 2022 (UTC)

Simulations and the Elder Scrolls

Allow me to indulge in crackpot theorizing for a second. Assuming the simulation hypothesis is true, could reality be nothing more than a realm designed to torture us mere mortals a la Altmeri interpretations of Mundus and (their interpretations of) Lorkhan's visions for Mundus in The Elder Scrolls? Perhaps reality is the true Basilisk after all... Vee (talk) 19:44, 10 October 2022 (UTC)