Talk:Roko's basilisk


This LessWrong related article has been awarded GOLD status for quality. Please keep this in mind when editing the article. See RationalWiki:Article rating for more information.

Cover Story
This article is among those randomly included on the Main Page.
Please keep this in mind and be sure that your edits are of the quality that this implies.
Its front-page abstract can be found here and its editnotice here.
Archives for this page

See also the original post.


Utilitarianism

As someone who started worrying about the basilisk a week ago, I’m confused about how this sort of AI wouldn’t follow through with torture. If it puts utilitarian ethics to use, wouldn’t it then follow through with the torture? Even if I choose not to participate in making it come into existence? Wouldn’t it have enough resources to do so?--Oblivious (talk) 23:51, 18 March 2021 (UTC)

Consider the moment at which the AI is complete. Ask yourself, would going through with the torture at this point lead to a net increase in total good? If it would not, then why would a utilitarian AI do the torture? If so, where is the additional utility coming from? 68.56.144.8 (talk) 00:31, 19 March 2021 (UTC)
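To make the comparison in the reply above concrete, here is a minimal sketch in Python. All of the utility figures are invented purely for illustration; the only point is that once the AI already exists, carrying out the torture subtracts utility without adding any.

    # Toy utilitarian bookkeeping for the moment at which the AI is complete.
    # All numbers are made up for illustration; only the comparison matters.
    UTILITY_AI_EXISTS = 1000      # good already realized by the AI existing
    COST_OF_TORTURE = -50         # suffering created by carrying out the threat
    BENEFIT_OF_TORTURE_NOW = 0    # the AI already exists; torture cannot speed that up

    with_torture = UTILITY_AI_EXISTS + BENEFIT_OF_TORTURE_NOW + COST_OF_TORTURE
    without_torture = UTILITY_AI_EXISTS

    print(with_torture, without_torture)  # 950 vs 1000: torturing is strictly worse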
Utilitarianism is a method of evaluating ethics and morality, not a goal system itself. The basilisk has had staying power as a discussion topic largely because it arises from a flaw in the methodology and goal structures typical of these sorts of discussions on LessWrong. They believe that a benevolent AI should be programmed with certain goals and analytical methodologies, which have the logical implication of a basilisk. If an AI were programmed differently, it would behave differently. 192․168․1․42 (talk) 10:14, 20 March 2021 (UTC)

Is torture rational?

Would there really be any reason to torture past people?--2603:7000:1702:99CE:B189:37D7:81F5:A7A5 (talk) 17:07, 20 March 2021 (UTC)

No and no. ☭Comrade GC☭Ministry of Praise 17:07, 20 March 2021 (UTC)
Why do you keep asking questions about this as if it were a real thing? It isn't. Bob"Life is short and (insert adjective)" 17:34, 20 March 2021 (UTC)
There are two kinds of rationality, epistemic and instrumental. Epistemic rationality is about getting correct information. Is torture epistemically rational? It doesn’t reliably produce correct information, so no, at least generally speaking. Instrumental rationality is about achieving goals. Is torture instrumentally rational? That depends on what the goals are. Torture has a variety of consequences. If those consequences generally align with the goals you want to achieve, it can be rational. As an example, consider the part in The Empire Strikes Back where Han and Leia are being tortured. Han comments that they didn’t even ask any questions. But Vader didn’t want information; he wanted to bait Luke into a trap. And since Luke was a Force-sensitive friend of Han and Leia, there was a good chance that he would pick up on their distress wherever he was and come to save them. Torture in that situation was rational. In the basilisk scenario, the goals and analytical methods favored on LessWrong produce a rational basis for the AI to conduct torture. 192․168․1․42 (talk) 18:17, 20 March 2021 (UTC)
In this completely fictional scenario the AI already exists, torturing people won’t retroactively make it exist any sooner. Torture isn’t rational in this context. Christopher (talk) 18:21, 20 March 2021 (UTC)
According to your personal goal structure and analytical methods, no. But those are not the only possible options. As outlined in the article's "Background" section, LessWrong has a rather esoteric view of those topics. Change the AI's goal structure and its analytical methods, and you change what it considers to be rational. 192․168․1․42 (talk) 18:29, 20 March 2021 (UTC)
If your “goal structure and analytical methods” ignore causality, they’re wrong. You might as well worry that an AI will torture you because it’s racist. Christopher (talk) 18:34, 20 March 2021 (UTC)
As I said, rather esoteric. 192․168․1․42 (talk) 18:39, 20 March 2021 (UTC)

Putting the question another way - what could Otzi the Iceman have done differently?

And - given that 'currently existing computer programmers (at the time of the AI coming into existence)' might well develop AI Mark 2 (without the questionable behaviour) which would replace the basilisk-entity, surely it would be in the basilisk's interest to torture #them#? And would the AI's definition of torture be the same as that of the carbon-sentients? Anna Livia (talk) 19:15, 20 March 2021 (UTC)

To avoid torture

“ One way to defeat the basilisk is to act as if you are already being simulated right now, and ignore the possibility of a negative incentive. If you do so then the simulator will conclude that no deal can be made with you, that any deal involving negative incentives will have negative expected utility for it; because following through on punishment predictably does not control the probability that you will act according to its goals.”

Can someone explain this to me a little more clearly? I’m not exactly sure how punishment doesn’t control the probability that I will act according to the basilisk’s goals.--Oblivious (talk) 21:55, 21 March 2021 (UTC)

Basically, the whole point of the torture is that people now will predict it and take it seriously, so that they will actually respond to the threat of it. But if you predict it, don't take it seriously, and ignore it, then the deal doesn't work, and in fact is rendered moot, nullifying the AI's motivation to do it in the first place. You are probably more worried about this than you should be. First, even taking every claim made in the argument very seriously, the odds that such a machine is ever created are extremely small. More importantly, though, it is not clear that the claims in the argument should be taken seriously in the first place. The AI is assumed to be implementing a particular value system, which is now considered obsolete even by its creator, and the whole idea is also dependent on timeless decision theory, an extremely vague, incomplete idea which may not even be coherent, and on acausal trade, which again is pretty vague, and not clearly coherent or complete. It is unlikely that anybody is going to be able to give you a very detailed response to your questions, because you are asking about an obscure idea proposed by somebody on the basis of incomplete, esoteric ideas put forward by somebody who may or may not have been greatly out of his depth when he proposed them. These ideas have not been widely taken up by experts and have not created much concern outside of the online community where this problem was first presented. Basically, in order for somebody to give a complete explanation of the AI's justification for torture to you, they would have to invent the underlying theories that the thought experiment is based on, because they were never actually described adequately in the first place. 68.56.144.8 (talk) 22:18, 21 March 2021 (UTC)
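As a rough illustration of the point about the threat no longer paying off, here is a small Python sketch. The probabilities and utilities are assumptions chosen only for illustration; the idea is that if resolving to ignore the threat keeps your probability of complying fixed, then precommitting to torture only adds a cost for the AI.

    # Toy payoff model for the acausal-blackmail argument described above.
    # All numbers are invented; they are not part of the original discussion.
    def ai_expected_utility(commits_to_torture, p_comply):
        """Crude expected utility, for the AI, of precommitting to torture."""
        BENEFIT_OF_COMPLIANCE = 100   # value of extra help building the AI sooner
        COST_OF_TORTURE = 10          # resources spent carrying out the threat
        gain = p_comply * BENEFIT_OF_COMPLIANCE
        loss = COST_OF_TORTURE * (1 - p_comply) if commits_to_torture else 0
        return gain - loss

    # If you resolve in advance to ignore any such threat, the threat does not
    # change your probability of complying, so the commitment is pure cost.
    print(ai_expected_utility(True, 0.0))   # -10.0: commit, nobody complies
    print(ai_expected_utility(False, 0.0))  #   0.0: no commitment, no cost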

Jeremy Bentham and the Basilisk

The logical question, applying Bentham's calculus of pleasure - what benefit does the AI/basilisk get out of 'torturing' (as it defines it) simulacra of (dead) people? People doing 'hatchet jobs' get some benefit (royalties from books/payments for articles, publicity etc) - there is no obvious equivalent for pursuing the basilisk. Anna Livia (talk) 13:18, 23 March 2021 (UTC)

The makers

Are we assuming that the makers of the AI wouldn’t program the AI to torture, whether it’s reasonable or not? And even if not, several decision theories allow one to follow through on acausal threats and promises.--Oblivious (talk) 21:11, 23 March 2021 (UTC)

Do you have a source for the claim that several decision theories allow following through on acausal threats and promises? 68.56.144.8 (talk) 22:05, 23 March 2021 (UTC)
I ask because a search for "acausal promise" brings up 12 step programs and something about the Sun, for "acausal threat" brings up LessWrong and RationalWiki, and for "acausal decision theory" brings up causal decision theory and LessWrong. Granted, it brings up multiple LessWrong theories, including TDT, UDT, ACDT, and FDT, but all of these theories belong to the same insular community, and none of them seems to have generated any interest outside that community. Yudkowsky's most recent theory seems to be FDT, and he seems to have gone beyond LessWrong with it, but only into his own comparably insular world, and the only paper he seems to have put forward on it is disappointingly vague. The reality is that if these theories were actually interesting alternatives to causal decision theory and evidential decision theory, by now they would have been picked up by decision theorists. If you have found a theory that can accommodate these ideas that didn't spawn from LessWrong, that could be interesting, but I cannot find anything. 68.56.144.8 (talk) 00:21, 24 March 2021 (UTC)
Second paragraph of this article says it: https://www.lesswrong.com/tag/rokos-basilisk

I honestly tried to look myself if that claim was true, and couldn’t find anything. So I just posted my comment here to see if anyone knew more about it. Besides that however, wouldn’t future people program an AI to torture?

So, the best interpretation I can figure out so far is this: the AI is expected to immediately determine that timeless decision theory is optimal and immediately precommit itself to torture people who have access to certain information and respond to it in a certain, sub-optimal way. The precommitment guarantees the torture and legitimizes the threat. So even though the AI isn't programmed to torture, the underlying utilitarian value system that is programmed into it allows it to do so (at least according to Roko).
Personally, having looked at the problem a bit, I find that the principle that simulations of you are you is the most obviously problematic point, partly because it can be addressed without going into the details of decision theory. Consider a thought experiment: two identical twins are raised under perfectly identical conditions, so that even after reaching adulthood each is exactly the same as the other in every way. Assume that whatever conditions are necessary to enforce this situation are met (even though this would be highly unethical to do in real life). We'll also suppose for simplicity that the twins are isolated from each other, so that although each knows of the other's existence, they cannot directly interact with each other (i.e. they live on opposite sides of the state/country/world). One day, I go up to one of them and start beating him with a stick. Since the twins are the same in every way, this twin is effectively a simulation of the other one, and so beating him with a stick should, under the reasoning used in Roko's basilisk, qualify as beating the other twin with a stick. Now, does the other twin actually experience any form of pain or torture as I am beating his twin with a stick? To me, it seems that the answer is no.
Separating the simulations in time should have no impact on whether this is true. For instance, consider the same thought experiment: both twins are the same in every way and have had all the same experiences, except one of them has had every experience and performed each action 10 minutes after the other one. If I beat the one who does everything first with a stick, does the other one experience pain 10 minutes later? I see no reason to think that the torturing of the one counts as torturing the other, and thus Roko's AI torturing a simulation of you should not count as torturing you either.
If you are still worried about the basilisk, here's another thought experiment to consider. Suppose I create my own AI research program. My program seeks to construct an AI that will torture (simulations of) people if and only if Roko's AI would not torture those people. I do not claim that it is utilitarian, but it is just as conceivable as Roko's, and just as possible to construct. It should therefore be similarly concerning. Here's another possibility: an AI that tortures (simulations of) people if and only if Roko's AI is torturing those people, and tortures them more intensely than the basilisk. Thus, if the basilisk tortures somebody, my AI guarantees that they experience more torture than the basilisk has deemed appropriate, and it cannot fix the problem by reducing the amount of its torture to balance my AI, since my AI always tortures more than the basilisk would, or not at all. If necessary, my AI can simply torture many copies of the simulations at the same time, to guarantee that it causes more suffering than the basilisk.
This is effectively a form of Pascal's wager: if you are worried about the basilisk, then you have equal reason to be worried about an infinite series of possible alternatives, each of which will torture you under specific, but different conditions and each of which also carries some possibility of arising in the real world (at least as much possibility as the basilisk has). 68.56.144.8 (talk) 12:57, 24 March 2021 (UTC)
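A quick sketch of that Pascal's-wager-style cancellation, assuming (purely for illustration) a handful of equally improbable hypothetical AIs with opposite punishment policies; since every such AI is matched by an equally conceivable opposite, the expected values come out the same either way. None of these AIs or numbers come from the discussion above.

    # Toy "many basilisks" calculation. Probabilities and payoffs are invented.
    TINY = 1e-12  # stand-in for "extremely improbable"

    # (probability, payoff to you if you helped build AI, payoff if you ignored it)
    hypothetical_ais = [
        (TINY, 0, -1000),   # Roko-style AI: punishes those who did not help
        (TINY, -1000, 0),   # anti-basilisk: punishes those who did help
        (TINY, 0, 0),       # indifferent AI: punishes no one
    ]

    ev_help = sum(p * pay_help for p, pay_help, _ in hypothetical_ais)
    ev_ignore = sum(p * pay_ignore for p, _, pay_ignore in hypothetical_ais)

    print(ev_help, ev_ignore)  # both -1e-09: the threats cancel, neither choice is favored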
So then why shouldn’t we all worry about the infinite possibilities of an AI torturing us just for the hell of it?--2603:7000:1702:99CE:F9BF:DF5B:B99A:F088 (talk) 00:24, 25 March 2021 (UTC)
First, considering the twin example (among other possibilities), you shouldn't be worried unless you truly consider a simulation of you to be the same as you. Second, even if we consider the combination of all these possibilities, there is a high likelihood that none of them will ever actually come to pass. Even considering only cases in which such an AI is actually built, for each case in which you are tortured, there is at least one in which you are rewarded, and one in which you face no special consequences. Third, if you are going to worry about highly abstract theoretical future AIs torturing you forever, you should be even more worried about the possibility of a judgmental god or gods torturing your particular individual soul after you die. Indeed, Roko's basilisk is effectively an instantiation of a god in machine form. Abstract the Abrahamic God a bit, and consider Him as a being that, upon your death, judges the actions you took in life and sentences you to either eternal happiness in heaven, or eternal torture in hell. Arguably, you have at least as much reason to believe in such a god as you do in Roko's basilisk. Fourth, if you are going to worry about these things, then you should probably also take note of Hume's skepticism about causation, and constantly be worried that everything that has ever happened was simply accidental coincidence, and the whole universe will end imminently, maybe tomorrow, maybe this very hour. After all, there is no purely logical argument from "A has always been followed by B" to "A will always be followed by B." On the other hand, if you feel pretty confident that, for instance, the Sun will rise tomorrow, on the grounds that it always has, then you probably shouldn't be worried about this at all. Some things you can just take as given, without worrying about infinitesimal probabilities. The anxiety you create in yourself worrying about such things will torture you far more than any of these hypothetical harms ever will. 68.56.144.8 (talk) 01:10, 25 March 2021 (UTC)
Also, note that the probability of Roko's basilisk really is infinitesimal (i.e. 0), as for each possible world containing Roko's basilisk, there are infinitely many possible worlds not containing Roko's basilisk. The same argument can be applied to any particular hypothetical AI, which is why you also should not celebrate in anticipation of infinite future pleasure at the hands of some benevolent AI. Also, note that perfect simulation is impossible, because things are not well-behaved on small scales. Also, note that even if perfect simulation were possible, torturing a perfect simulation of you after you die would be impossible. Either the simulation is of you at some point during your life, in which case the simulation diverges from whatever actually happened in your life at the point where the torture begins, and is therefore not you, or the simulation is of you after having all the experiences of your life, which is then not a simulation of you, since after having all the experiences of your life, you were not transferred to eternal torture: you died. Furthermore, the latter case can be even more easily rendered moot: when you are about to die, take a substance or undergo some process that renders you incapable of sensory experience without killing you. Thus, a simulation of you after that point in life could not be tortured. 68.56.144.8 (talk) 01:29, 25 March 2021 (UTC)
Are you trying to say that ridding yourself of all senses before dying would solve the problem of Roko’s basilisk, despite it being impossible to simulate past people? And as for it being impossible, who’s to say it will not be possible in 100 years, or whenever the singularity is reached, to resurrect people?--Oblivious (talk) 20:20, 26 March 2021 (UTC)
“Singularity” won’t ever be reached, and it will never be possible to resurrect people. I’m not saying you are, but you sound absolutely insane. Christopher (talk) 20:33, 26 March 2021 (UTC)
I am saying that there are multiple problems with Roko's basilisk. Even if it were possible to simulate past people, you could still solve the problem by ridding yourself of all sensory experience. I agree with Christopher that the singularity will never be reached, but even assuming it were, it would not be possible to resurrect people. Again, things are not well-behaved on the small scale: information is lost over time, and given that the brain contains a lot of fine structure, it wouldn't take long before trying to revive it would be a lost cause.
Just for fun, here's another case that poses problems for the basilisk. Consider an AI similar to Roko's basilisk. It has the capacity to simultaneously simulate every human consciousness at once, with some computing power left over. Instead of simulating all these people, it could devote all these resources, plus the extra, to its own mind. The AI thus has a greater capacity for happiness than all people combined: if it can simulate all people being happy, then it can have happiness equal to all this happiness (this is perhaps still controversial, but I think Roko would accept this as true). Now, the AI is superintelligent, and thus recognizes that if people try to build a superintelligence, they will probably build one that is incompatible with human values. This AI, like Roko's basilisk, is programmed to have human values, and thus considers these probable AIs that would violate human happiness undesirable. Whereas Roko's basilisk precommits, under TDT and acausal trade, to torture (simulated) people who do not devote all their resources to the development of the basilisk, my AI follows a different calculus, which is more accurate. In light of the fact that the effort to develop a benevolent AI will probably lead to the development of an amoral or malevolent AI, and that such an AI would vastly reduce human happiness, my AI precommits, under TDT and acausal trade, to devote all its resources to self-torture if created. Since this results in more torture than would be achieved by torturing simulations of everybody, this threat is more severe on utilitarian grounds than that of Roko's basilisk. Therefore, the moral thing to do is not to contribute to AI research, and simply continue living your life in peace, since contributing to AI research will hasten the development of my AI, and it is wrong to create my AI. No need to worry about Roko's basilisk, because my AI is simply a version of the basilisk that accounts for additional relevant information, and thus supersedes the basilisk. 68.56.144.8 (talk) 21:24, 26 March 2021 (UTC)
I know it may seem that way, but in no way do I advocate this thought experiment. I simply want to understand why people believe this, and why others don’t. There are two sides to every story, which is why I would like to hear from anyone who disagrees, so that I can understand this thought experiment better.--Oblivious (talk) 21:37, 26 March 2021 (UTC)

Quantum Immortality Relating to the Basilisk

Quantum Immortality is the theory that, for example, if you got into a deadly car crash and died, your consciousness would be transferred to an alternate universe within the multiverse where you survived. This would keep happening in life-and-death situations throughout your life, basically meaning that you are immortal. If this theory is true, and I can’t die, couldn’t I be transferred to a universe where I continue to live through a simulation of myself that gets tortured via evil AI? I know it sounds crazy, but quantum mechanics puts the idea on the table.--2607:FB90:12C6:55DE:81C3:C94F:8D79:4643 (talk) 19:07, 1 April 2021 (UTC)

The Basilisk is stupid on a conceptual level. It's basically just Pascal's wager for techbros. It's dumb, the premise is riddled with flaws, and it requires subscribing to LW's rather insular culture in order to begin to make sense of it (don't, they're dumb and talking out of their asses most of the time). ☭Comrade GC☭Ministry of Praise 19:16, 1 April 2021 (UTC)
Yes. It's one of the dumbest ideas on the internet - yet we seem to have a constant stream of people who want to believe it's real. Weird. Bob"Life is short and (insert adjective)" 19:28, 1 April 2021 (UTC)
There's a joke about organized religion in there somewhere.-Flandres (talk) 21:04, 1 April 2021 (UTC)