Talk:Roko's basilisk
- Archives
- See also the original post.
It seems there are a lot of misconceptions, or maybe I'm just missing something? Help?
So from what I understand, the way RB essentially works is that RB, in the future, already has a copy of you being tortured, and that this should make you worry right now (and I might be wrong here, but the LW community apparently even believes that doing something at some point can affect the past in some general way).
And it seems to me that people wrongly adopt the view that RB will just simulate the whole thing, a whole new universe, and that you might be the copy. But that's not in RB, and I don't know about TDT, UDT or whatever LW uses, but if I recall correctly this comes from the "AI boxes you" thing? And then people just graft it onto the concept, even though Roko never intended it that way? (At that point it's not even RB anymore, it's just some new stuff you came up with, thinking it might have some rationale because it came from something that attempted to look rational.)
I'm not even sure where the multiverse thing starts, and I also read (page 6 of this archive) that "there are more simulations than realities", and I don't know if that stems from the "AI boxes you" concept too.
I also recall hearing that RB might "time travel"; I don't know what RW's view of this is...
I honestly think the whole thing is nonsense, but I can't help ruminating sometimes, and I might also suffer from GAD (and I've known about RB for a while; I just recently came back here, and re-learning some stuff and picking up some small detail seems to have made me stupid about it again), and I wanted to get some of this stuff out of my head. I would really appreciate an answer. AnAccount123 (talk) 22:37, 28 November 2024 (UTC)
- It's complete nonsense from start to finish. So trying to make coherent "sense" of it is simply a waste of time.Bob"Life is short and (insert adjective)" 09:23, 29 November 2024 (UTC)
- So essentially it's just impossible to give it a "standard narrative", given how nonsensical it is? My point with the "standard narrative" is that the people who get worried about this seem to grasp the concept from a different angle than you'd expect, and that might be because (it seems to me) the concepts/ideas appear too scattered (I don't know if I'm being clear here). And it gets distressingly confusing for them. This is why I'm asking where/how some idea fits into this, or how much weight some idea has (as I mentioned in my previous edit). AnAccount123 (talk) 21:19, 29 November 2024 (UTC)
- The way Roko’s Basilisk is supposed to work is like this: Possibly, in the future, an AI will be developed that (i) is positively disposed toward human beings and (ii) subscribes to Yudkowsky’s “timeless decision theory”. This AI comes to conclude that the best thing for human well-being is itself. So, it would have been better for humans if the AI had been invented earlier. According to Yudkowsky’s theory, the AI can then reasonably make a 'retroactive' threat against people who lived in the (possibly distant) past, that they must devote themselves to its development or it will torture them forever for failing to speed up the invention of the AI. Roko imagines that this punishment is accomplished by producing a simulation of you and torturing 'that'.
- This is all very silly, in large part because “timeless decision theory” makes little sense, but also because Roko is committed to additional metaphysical claims that are, at the very least, controversial. To the extent that there is confusion among people worried about this, it is probably because the thought experiment implicitly relies on controversial claims and a bizarre “decision theory” (I use scare quotes because TDT was never described in detail or given an explicit rigorous formulation. Besides that, it was motivated by commitments that are rejected by more-or-less all academic decision theorists). Most people, having interpreted everything 'correctly', will find that the thought experiment does not motivate concern. From what you write, it sounds like people are looking for a reinterpretation that makes Roko and others’ fears more comprehensible. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 22:52, 29 November 2024 (UTC)
- It's really a bit like asking for a "standard narrative" of bigfoot reproduction or the development of fairy languages. Given that the base assumptions about these beings actually existing are incorrect - and actually rely on faith rather than evidence - it's unlikely that you are going to get consistent narratives based on such flawed initial assumptions.
- Another example would be people looking for ways to explain the Great Flood. We have no evidence that The Flood happened and plenty of evidence which demonstrates that it didn't. So looking for some kind of standard narrative which somehow reconciles the biblical account with demonstrable reality is an utter waste of time.Bob"Life is short and (insert adjective)" 15:05, 30 November 2024 (UTC)
- So to respond to Irene, not really like that. What I mean is that people come across confusing (and ultimately erroneous) concepts of RB, and it's more that their anxiety is what gives RB a reinterpretation and makes the fears more comprehensible. In the talk pages there are many people worried about "being the copy" and the multiverse, and I raised the question of whether this comes from the "AI boxes you" idea which is referenced in the article (in my initial post, where I also mentioned someone talking about time travel or there being more simulations than realities; if that didn't make sense at first, I hope it will make more sense now). It's like: "A" just learned about RB and finds things confusing; "B" knows RB well and is arguing about many concepts scattered here and there, which "A" doesn't fully get; "A" could still find the idea silly, but when he talks about it with other people, you have C, D, E hearing his narrative, in which "A" grasped the concepts a little badly, and you have C, D, E grasping the concepts in an even worse way. Ultimately it seems to me there's a big misconception around this, and those misconceptions are even less rational, but people probably don't realise it because their basis is something that attempted to look rational (and this is probably my main point). I hope I made myself clear.
- As for Bob: "It's really a bit like asking for a "standard narrative" of bigfoot reproduction" LOL AnAccount123 (talk) 21:53, 30 November 2024 (UTC)
- some people are convinced the world is run by shape-shifting reptilians, some people swear blind the earth is flat. others insist the moon landings were faked, while others still look at the pyramids and think there is no way ancient egyptians built them without the aid of atlantean/alien super technology. far too many think all of the above, and to them its just so obvious that its all true. all on the flimsiest of evidence, while the most basic of google searches turning up reams and reams of actual evidence and proven science saying otherwise can never be incontrovertible enough to convince them that their half-arsed assumptions and assertions are wrong.
- i wonder if roko's basilisk is just the shapeshifting reptilians of a different demographic? maybe one that can do maths or something? but serving the similar purpose of distracting the believers from thinking about the scary real world issues that seem too chaotic and out of our control to want to accept? AMassiveGay (talk) 23:25, 30 November 2024 (UTC)
- That's not quite what I'm saying though; my point is more about misconceptions due to concepts which seem "scattered", not people who actively want to believe this and make sense of it. Damn, genuine question: is everything that I wrote so unclear? I might rewrite my entire point and try to make it more concise and coherent, if that's the case. AnAccount123 (talk) 20:01, 1 December 2024 (UTC)
- If various people have already responded to it then you should not edit it. Rewrite it again below.Bob"Life is short and (insert adjective)" 20:03, 1 December 2024 (UTC)
- Nah, honestly, I think ultimately I won't do it; it's not really needed. I would just like to know people's take here on this specific part of my very first edit:
- ~"I'm not even sure where the multiverse thing even starts, and I also read (Page 6 of this archive) that "there are more simulations than realities", and I don't know if that stems from the "AI boxes you" concept;
- I also recall hearing that RB might "Time travel", I don't know what's RW view of this..."~I would just appreciate this, I don't think I have much else to add other than this.AnAccount123 (talk) 13:13, 2 December 2024 (UTC)
- it seems to me the idea of rb creating simulations is because there is no real way to punish folk who are probably long since deceased, and rb being able to punish us seems to be the whole point of this nonsense. they went with simulations that are so real, or really so magic, that it really is us. its so much arse, and remains so much arse if people wail about 'what if we're in the simulation now? its so perfect we cant even know'. i assume the time travel idea is just another stab at pretending it isnt all nonsense. not convinced by hyper simulations of dead people or entire realities? it might travel in time and come get ya. rb becomes skynet at this point. there is nothing rb cannot do it seems. what if wormholes? it is magic and only limited by our imaginations and make believe. it seems maybe possible, as a thought experiment, that one is not supposed to think about these details of how rb might do the punishing. just assume that it can, its not important, the point is to discuss this 'timeless decision theory' or something. it would be like being presented with the schrodingers cat thought experiment and becoming obsessed with what breed of cat it is. perhaps the fault is that rb is a rubbish thought experiment, or perhaps the fault was presenting it to an internet forum where people will always fixate on the details of the hypothetical scenarios and the intended purpose of such experiments is never addressed. its still bizarre to me that this is still a thing or that it ever was in the first place. AMassiveGay (talk) 14:46, 2 December 2024 (UTC)
Yeah, exactly, and it also diverges from what RB is actually about; it's more a fear that someone's anxiety creates (and, as other people said, the exact opposite can just as well be true at this point). I wondered if this came from the "AI boxes you" thing in the article, and it likely originates from that, but Roko never used it. Also, the time travel thing: I heard about this a while ago, maybe more than once, I can't remember where, maybe reddit and/or something on quora, but I recall in some instance someone bringing it up, getting corrected, and then responding "well, how is it supposed to do it then?". The "many worlds"/"Everett branches" stuff is mentioned on the "Time travel" Wikipedia page, but it doesn't seem to me to make sense for RB, though you could assume there's a little background to these things. Still, basis or not, people ultimately "come up" with this stuff, though I don't think voluntarily, at least for the most part; it's anxiety and "stuff" (well, isn't RB called an "autism referendum"?). Anyway, thank you for addressing that part, I consider myself more than satisfied right now. AnAccount123 (talk) 22:33, 2 December 2024 (UTC)
- With regard to time travel though - the idea of RB is already bonkers. Adding time travel to the mix will not make it any less bonkers - quite the opposite in fact.Bob"Life is short and (insert adjective)" 15:31, 3 December 2024 (UTC)
- Yeah, I had 95% the same feeling, since time travel sounds too "woo" as a thing anyway (and sci-fi fans would just be more familiar with that idea, so they might do some wacky reinterpretation), but I'm not an expert on this stuff, so that 5% just wanted to ask about it anyway. And I like that you mention "it's already bonkers"; in fact, to people who worry about those kinds of scenarios: don't narrow it down to one concept, there are plenty of other flaws with the rest, way too many. AnAccount123 (talk) 17:15, 3 December 2024 (UTC)
(reset) And nobody has come up with a valid 'counter-argument' to my point about RB's 'torture' involving being sent into the natural world without a connection to 'the tubes.'
And it is in the Basilisk's interest not to be created too soon - or it would be 'punched cards, early COBOL/FORTRAN, Windows 1' etc. Anna Livia (talk) 16:32, 2 December 2024 (UTC)
- Uhmmm yeah.AnAccount123 (talk) 22:33, 2 December 2024 (UTC)
Another question, Sabine Hossenfelder
So on a website I saw an article about a physicist called "Sabine Hossenfelder" (who doesn't even seem to be an obscure figure), describing how, at some point in her book ("Existential Physics"), she discusses the idea that information cannot be destroyed when someone dies, and that some intelligence could theoretically re-assemble it (the article doesn't expand on this much, and it's in a foreign language so it makes no sense to share it; it's the classic page of technology fans making posts about interesting subjects in a positive light). That immediately made me think about the discussions in the archived pages about "re-assembling" the dead, so my question is just whether people here know of this "Sabine Hossenfelder" and what their take on this is. AnAccount123 (talk) 15:56, 20 December 2024 (UTC)
- I am not going to get involved in this but I have a couple of suggestions about your posts. When you say you saw something "on a website", it is good practice to link to that website so that others can see what you are talking about. Secondly, it would be better if you addressed your questions to the content of the article rather than to what some currently anonymous author may have casually said in the chat archives months or years ago. There is no actual rule about resurrecting old conversation points but it is probably best avoided.
- For the record, I have never heard of "Sabine Hossenfelder".Bob"Life is short and (insert adjective)" 07:23, 21 December 2024 (UTC)
Is this the article you were talking about? https://hippocampusmagazine.com/2022/09/interview-sabine-hossenfelder-author-of-existential-physics-a-scientists-guide-to-lifes-biggest-questions/ Throwaway187 (talk) 02:26, 7 February 2025 (UTC)
Also, Sabine Hossenfelder has argued on multiple occasions that simulation theory, a major component of Roko's Basilisk, is physically impossible.Throwaway187 (talk) 02:49, 7 February 2025 (UTC)
This is really messing me up
Learning about this concept has basically destroyed my ability to function. I am beginning to worry that this concept is rational. This page helped me calm down in the past but I found this reddit comment that responded to some of the critiques in the article. (https://www.reddit.com/r/DnDBehindTheScreen/comments/mdd82b/comment/gswjli8/)Throwaway187 (talk) 22:14, 11 January 2025 (UTC)Throwaway187
- My answer will focus on present technology. As in: understand that present "artificial intelligence" is not that intelligent. No matter how much the AI hype-meisters hate the term, "stochastic parrot" is a pretty good description of what it is. It is good (within limitations) at crunching tremendous amounts of data and returning data, fashioned in plausible language. That's it.
- In order for Roko's basilisk to actually become a reality, artificial intelligence would need many things: awareness of self, awareness of others, emotions, judgement. Simulations would also need to be somehow sentient and somehow connected to the real thing. I see no evidence of anything *near* this at present. Heck, I see no evidence of artificial intelligence doing simple things like "asking questions", even though I think people are working on it. But for now, a 3 year old human being (who will ask *plenty* of questions) has more cognitive skills than AI, in my estimation. It will most likely be this way for a while.
- The problem with theoretical stuff like this, to me, is that there is literally no explanation of the underlying technology behind it. It's "magic" in the background. If kept as a thought experiment or as a sci-fi story, fair enough, but in order to worry about stuff like this in the real world, from my perspective we have to know how this works in the real world in order to really think about the problem. Science is not magic. BobJohnson (talk) 23:50, 11 January 2025 (UTC)
- There are a lot more useful things to mess yourself up with. Like alcohol. Carthage (talk) 00:34, 12 January 2025 (UTC)
- And, as I have said at various points (now on the archive pages) - the Basilisk's idea of 'torture' may not be the same as ours. We can find 'disengaging from the internet' and engaging with The Real World (eg going for a walk in the countryside, pursuing a hobby ...) enjoyable but to the Basilisk such a disconnect may be very negative as a concept.Anna Livia (talk) 00:47, 12 January 2025 (UTC)
- Also, the very person who wrote the 4-year-old comment you linked to just a few hours ago called Roko's Basilisk 'decade-old brainworms for an AI architecture that probably isn't gonna come to pass', so don't worry about it. TheOneAndOnlyCirrusMan (talk) 02:19, 12 January 2025 (UTC)
- As people have pointed out multiple times the whole thing is basically SF nonsense. You might as well worry about "Alien" or "Predator" coming to eat you in your bed.Bob"Life is short and (insert adjective)" 06:30, 12 January 2025 (UTC)
- My suggestion is 'a way out of the mindset arising' - and engaging with The Real World and The Natural World is A Good Thing generally. Anna Livia (talk) 10:52, 12 January 2025 (UTC)
- The same user whose comment you've linked to also comments, in the same thread, "my position is that the basilisk is wrong, but that it is wrong for involved technical reasons". So immediately, this is a comment from somebody who does not themself believe that the argument for the Basilisk is good.
- Roko founds the Basilisk's threat specifically on the decision theory in which it was formulated, under which following through on it is rational. The problem with this defense is that it turns on the success of the underlying decision theory, in this case timeless decision theory. But, as has been pointed out here before, timeless decision theory is not a good decision theory (even the problems that motivate it are not generally agreed by field experts to be problems), and so being a rational agent by the standards of timeless decision theory is not a good indication of being a rational agent. (Note that the 120-page paper linked in the reddit comment is not a publication in a reputable journal. Rather, it was published by MIRI, an organization founded by Eliezer Yudkowsky, who is also the author of the paper.) 𝒮𝑒𝓇𝑒𝓃𝑒 talk 17:46, 12 January 2025 (UTC)
- Okay, sorry for responding this late, but I am beginning to worry about this again. What are the issues with Timeless Decision Theory? I checked the archives and I couldn't find a concrete explanation. Throwaway187 (talk) 17:14, 21 January 2025 (UTC) Throwaway187
Are there Issues With Timeless/Functional Decision Theory?
The typical response to the Basilisk, that it has no incentive to follow up with the torture, doesn't work under the framework of Timeless Decision Theory. The Basilisk will predict that if humans predict that it won't follow up with the torture, it won't be built. An example of this is shown in Parfit's hitchhiker, a problem created by Yudkowsky. In his own words:
"You are stranded in the desert, running out of water, and soon to die. Someone in a motor vehicle drives up to you. The driver of the motor vehicle is a selfish ideally game-theoretical agent, and what's more, so are you. Furthermore, the driver is Paul Ekman who has spent his whole life studying facial microexpressions and is extremely good at reading people's honesty by looking at their faces.
The driver says, "Well, as an ideal selfish rational agent, I'll convey you into town if it's in my own interest to do so. I don't want to bother dragging you to Small Claims Court if you don't pay up. So I'll just ask you this question: Can you honestly say that you'll give me $1,000 from an ATM after we reach town?"
On some decision theories, an ideal selfish rational agent will realize that once it reaches town, it will have no further incentive to pay the driver. Thus, agents of this type answer "Yes," whereupon the driver says "You're lying" and drives off leaving them to die."
The solution here is to adopt a decision theory that requires following up on the promise to deliver the $1,000; this is the same system that makes the Basilisk work.
I hope somebody can find a logical flaw in this, because I can't. Throwaway187 (talk) 00:35, 30 January 2025 (UTC)Throwaway187
- It's important to recognize that Yudkowksy is not a professional decision theorist. He did not pursue an advanced education in decision theory, and consequently has an inaccurate view of what is novel in the field. The problem you present here as having been created by Yudkowsky is just a variant of a well-known decision problem, and the proposed solution is just a rehashing of an already-known decision procedure called Resolute Choice. The problem with resolute choice, and the reason why it is unpopular among decision theorists, is that it is an unrealistic model of choice, in that it treats agents as having as live options choices that they cannot actually make.
- Consider a more standard problem to start. Suppose you're heading home from work, and you happen to live in Canada, where the weather is unpleasantly cold this time of year. There are two routes you can take home. Route A is shorter, so you spend less time in the cold, whereas Route B is longer, so that you spend more time in the cold. Based just on this, Route A is better. However, Route A takes you past the local smoke shop, and you know from experience that whenever you walk past the smoke shop, you have the irresistible urge to buy a pack of cigarettes. Since you've quit smoking, you regard this as extremely undesirable - much worse than spending a bit more time in the cold.
- The best course of action for you would be to take Route A and walk past the shop without buying cigarettes. The worst course of action is taking Route A and buying cigarettes. In the middle is taking Route B, which is longer but avoids the smoke shop. Resolute Choice recommends that you, in effect, make a deal with your future self that you won't buy cigarettes, and proceed down Route A. In making this recommendation, Resolute Choice purports to secure you the best of all possible worlds, from your own perspective. The problem is that, by stipulation, at the time you reach the smoke shop, you will want to buy cigarettes, and your past self will have no power to coerce you into refraining. The option that Resolute Choice recommends is really just illusory; the only courses of action that you can actually take are Route B or Route A AND buy cigarettes. You do better by choosing Route B, and this is what mainstream decision theories recommend in this case.
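To make the structure of the smoke-shop case concrete, here is a minimal Python sketch; the utility numbers are purely illustrative assumptions, chosen only to respect the ranking described above (Route A without cigarettes, then Route B, then Route A with cigarettes).

```python
# Toy model of the smoke-shop case; the numbers are made up and only the
# ordering matters.
utilities = {
    "route_A_no_cigarettes": 10,   # shortest walk, no relapse (the "Resolute" plan)
    "route_B": 5,                  # longer, colder walk, no relapse
    "route_A_buy_cigarettes": -20, # short walk, but you buy the cigarettes
}

# By stipulation, once you are in front of the smoke shop the urge to buy is
# irresistible, so "take Route A and don't buy" is not an option you can
# actually carry out.
feasible = {"route_B", "route_A_buy_cigarettes"}

resolute_recommendation = max(utilities, key=utilities.get)
best_feasible_option = max(feasible, key=lambda option: utilities[option])

print(resolute_recommendation)  # route_A_no_cigarettes -- the illusory option
print(best_feasible_option)     # route_B -- what mainstream theories recommend
```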
- In the case Yudkowsky describes, we can imagine 4 "possible" courses: (1) you take the ride and don't pay; (2) you take the ride and pay $1000; (3) you don't take the ride and pay $1000; (4) you don't take the ride and don't pay $1000. Option (3) is silly, and can be ignored.
- Everybody agrees that the best thing for you, if you can do it, is (1). Resolute Choice recommends (2) (as does Yudkowsky). Most decision theorists would say you're stuck with (4). Why? Well, the driver can tell in advance if you're going to cheat them, so (1) is not actually on the table. Since (3) is just silly, this leaves us with (2) and (4). But, as the case is described, you are like the person who can't resist buying cigarettes - if you take the ride, you will cheat the driver. So option (2) is illusory, just like taking Route A without buying cigarettes. That just leaves (4).
- It's worth emphasizing here that decision theorists don't think that (4) is better than (2). They think that the agents that decision theory is concerned with (i.e. human beings) are not Resolute Agents, and so do not actually have (2) as an option. Failing to recognize this is seen as a drawback of a theory, not an advantage. If, in Yudkowsky's thought experiment, you had the option to take a pill that would compel you to pay up on reaching town, taking that pill would be widely regarded as a good idea by decision theorists. But gaining the ability to coerce your future self is not a matter of "adopting a decision theory".
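The hitchhiker case has the same shape, again with made-up numbers: once the options that an ordinary, non-Resolute agent cannot actually take are filtered out, option (4) is all that remains.

```python
# Toy model of Parfit's hitchhiker; utilities are illustrative (being rescued
# is worth far more than $1000).
utilities = {
    "ride_and_dont_pay": 100,   # (1) rescued and keep the money
    "ride_and_pay":       90,   # (2) rescued, minus $1000
    "no_ride_and_pay":   -110,  # (3) die in the desert and pay anyway (silly)
    "no_ride_no_pay":    -100,  # (4) die in the desert
}

# The driver reads faces perfectly, so a would-be cheat never gets the ride:
# (1) is off the table. And an ordinary (non-Resolute) agent who reaches town
# will not in fact pay, so (2) is illusory for such an agent as well.
feasible = {"no_ride_and_pay", "no_ride_no_pay"}

print(max(feasible, key=lambda option: utilities[option]))  # no_ride_no_pay, i.e. (4)
```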
- Roko's Basilisk suffers an additional failure. The Basilisk is, in effect, supposed to be a Resolute Agent that coerces its future self into a certain course of action in order to secure a benefit that it has already secured. This is a bit like swearing to yourself that you won't buy any cigarettes after having already made it home, or taking the pill that will coerce you to pay the driver after they have already driven you to town. The only reason to take the pill is to ensure that you make it to town, so it doesn't make sense to take it if you are already there (quite the contrary - if you have made it to town, and you didn't pay the driver, you do better by fleeing without paying than by taking the pill and paying the $1000). The Basilisk's course of action does not even make sense on Resolute Choice. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 03:44, 30 January 2025 (UTC)
- Okay, thank you for trying to help me with this, but isn't the point of the Basilisk that it has to follow up with the torture, because if it doesn't, humans will predict that it won't and not build it? I.e., the basilisk has already taken the pill. Also, the basilisk is supposed to be a hyperrational entity, so it could reliably precommit to decisions. Is the issue here that humans aren't Paul Ekman in this scenario and don't know whether or not the basilisk is lying to us about precommitting anyway?
- I should note that all of my "knowledge" of decision theory comes from anxiety spurred on by this very topic, so if this is egregiously wrong or misinterprets what you said please let me know. Throwaway187 (talk) 13:42, 30 January 2025 (UTC)
- Isn't the point of the Basilisk that it has to follow up with the torture because if it doesn't humans will predict that it won't and not build it? That is what Roko and Yudkowsky have in mind. The problem is that the Basilisk's behavior is irrational.
- Recall that the core idea of Resolute Choice is that you can make a deal with your future self - if I take Route A, you don't buy any cigarettes, deal? The problem with Resolute Choice is that, once I'm in front of the smoke shop, past-me is no longer around to enforce the deal. I can buy my cigarettes (which is what I want to do), and nobody is going to stop me or punish me or anything. To put it another way, I have no reason to follow through on a plan I no longer endorse. At the age of 5, I may have thought that the best of all possible life plans was to play with plastic toys and watch TV for the rest of my life. I may even have sworn to myself that I would do just that. Later on, at the age of 18, when I no longer regarded that as a good life plan, the opinions of 5-year-old-me were no reason for me to follow through with the plan anyway.
- Consider another idea. Suppose that I'm standing outside the smoke shop, and I decide that I'm going to make a deal with my past self - you take Route B, and I won't buy any cigarettes. Zounds, foiled by my past self! It's self-evidently absurd to try to make a deal with my past self to act otherwise than I actually did act. I already did whatever I did in the past. But this is the kind of deal that the Basilisk is trying to make.
- Suppose I'm at a restaurant, and the waiter takes my order. 15 minutes later, he serves me my meal. As he places it on the table, I pull out a gun and say, "If you didn't take my order earlier, I'll shoot you!"
- Suppose I'm at a restaurant, and the waiter neglects to take my order. 15 minutes later, he brings me a basket of bread. As he places it on the table, I pull out a gun and say, "If you didn't take my order earlier, I'll shoot you!"
- In either case, does my threat have any impact on whether the waiter took my order 15 minutes ago? Does the answer to that question depend in any way on whether I proceed to shoot him?
- It might be tempting to imagine that the Basilisk is different, because we make predictions about how it will behave. But why would that change anything?
- Suppose I create a hoax news article reporting on me, with a photograph, as a murderous, retributive serial killer of waiters who failed to take my order at restaurants. I then go to a restaurant, where the waiter recognizes me from the article, which he believes to be true. He either takes my order or doesn't. Does it matter whether I brought a gun with me?
- Suppose I really am a murderous, retributive serial killer of waiters who failed to take my order at restaurants, and have been reported on, with photographs, in the news. I go to a restaurant, where the waiter recognizes me from the news, which he believes to be true. He either takes my order or doesn't. Does it matter whether I brought a gun with me?
- The central claim that Roko makes is that the Basilisk should follow through on its threat to torture, because this will make us build it faster. But the Basilisk can't change the past no matter what it does. I can't make the waiter who didn't take my order have taken my order by shooting him now, so "it'll make the waiter have taken my order" isn't a reason for me to shoot the waiter. The Basilisk can't make people who didn't build it fast have built it fast by torturing them now, so "it'll make them have built me faster" isn't a reason for the Basilisk to torture. But this is exactly the alleged reason for the Basilisk to torture people. So the Basilisk has no reason to torture people. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 02:35, 31 January 2025 (UTC)
- Okay yeah, that makes sense. Thank you. Throwaway187 (talk) 13:28, 31 January 2025 (UTC)
- Hang on, is nuclear deterrence theory an example of decisions made in the future affecting the past? A rational agent with a nuke heading its way has no reason to retaliate: it's dead, and further loss of life is worse than just accepting death. But everyone knows this, so everyone precommits to maximum retaliation for even the smallest violation - and if you trust that everyone will go through with it (for example, if the nukes automatically retaliate with no human involvement), then no rational agent uses nukes in the first place.
- In the same way, if the basilisk precommits to carrying out the torture even if doing so has no material benefit to it, then people will recognize that and be forced to build it anyway. Throwaway187 (talk) 22:24, 3 February 2025 (UTC)
- Is nuclear deterrence theory an example of decisions made in the future affecting the past? No. Consider some possibilities:
- The US has a powerful nuclear arsenal. The USSR also has a powerful nuclear arsenal. US leaders decide not to attack the USSR.
- The US has a powerful nuclear arsenal. The USSR also has a powerful nuclear arsenal. US leaders decide to attack the USSR anyway. Everybody in the USSR dies.
- The US has a powerful nuclear arsenal. The USSR also has a powerful nuclear arsenal. US leaders decide to attack the USSR anyway. Some people in the USSR survive, and they launch a retaliatory strike.
- The first case is the only instance of successful deterrence. Note that no strikes occur at all in the first case, so that nuclear strikes cannot be the cause of the successful deterrence. The second case describes a successful first strike by the US, and the third case a failed first strike. But in the second case, the death of everybody in the USSR is not the cause of the US attacking. Likewise, in the third case, the retaliatory strike does not cause the first strike (nor of course does it cause deterrence - there was a strike, and deterrence thus failed).
- All of this is not to say that the US leadership would not take into account their expectations about the outcome of making a strike. But this is just what everybody does whenever they make a decision. When I buy an ice cream cone, I do it partly on the basis of my belief that it will taste like ice cream. The problem with the Basilisk is that I have no reason to think it will follow through, because its purported reasons for torturing are absurd. In contrast, contrary to what you claim, there are presumed reasons for a country to retaliate in response to a nuclear first strike. In particular, it is expected (i) that some agents in the struck population will survive, (ii) that the first strike will primarily target the nuclear arsenal and military facilities in preparation for a ground invasion, (iii) that residual nuclear arms can be used to counterattack civilian population centers and thereby cripple the ground invasion. Deterrence is centered on making it credible that a first nuclear strike will (i) carry an extremely high cost that can be imposed by the attacked state and (ii) fail to achieve the desired military objective. Note that deterrence is still forward-looking. Compare:
- I'm at a restaurant. The waiter sets a disgusting-smelling scented candle on my table. I pull out a gun and say "if you brought that candle to my table, I'll shoot you!"
- I'm at a restaurant. The waiter is approaching me with a disgusting-smelling scented candle. I pull out a gun and say "if you bring that candle any closer, I'll shoot you!"
- The former case is completely ineffective deterrence. I don't threaten anything until after the undesired action has already been carried out. The latter case is perfectly good deterrence - I make a threat in order to stop the waiter from doing something he has not already done. The Basilisk is not even purported to be doing the latter sort of thing.
- Note also the relevant lines of reasoning. "If I threaten the waiter, then he won't bring that awful candle over here" makes sense, but "If I threaten the waiter, he won't have already brought that awful candle over here" does not. We can imagine me shooting the waiter for sheer vengeance, but that is not a purported motive of Roko's Basilisk. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 02:30, 4 February 2025 (UTC)
So once again, I don't really know much about decision theory, so I'm not sure if this was already addressed and I'm just going in circles. But if I am 100% sure that A: an ASI will emerge, and B: the ASI will eternally torture anyone who doesn't help it come into being, then I will almost certainly help the ASI come into being. It would do this even if doing so appeared to be irrational. If I knew the ASI had precommitted to doing this, I would simply be forced to help the ASI. The thought experiment about the waiter doesn't have the caveat where the waiter predicts that the customer will pull out a gun if they deliver the candle, and the customer is predicting that the waiter knows that.Throwaway187 (talk) 13:31, 4 February 2025 (UTC)
- It is generally understood that an agent's beliefs about the outcomes of their actions have a role to play in determining what actions they ought to take. An agent who believes that they will be tortured extensively if they do not take action X has a subjective reason to take action X. Whether or not this is a good reason depends on the basis of their belief (i.e. whether their belief is justified). Sticking with the waiter, consider:
- Suppose I am a serial killer, targeting customer service people who displeased me. My face has been all over the news recently. I go to a restaurant, where I am recognized by the waiter. Fearing me, the waiter does everything he can to placate me.
- Suppose I am a serial liar. I cook up an ingenious hoax, successfully getting various news agencies to report on me as a serial killer targeting customer service people. I go to a restaurant, where I am recognized by the waiter. Fearing me, the waiter does everything he can to placate me.
- In both cases, the waiter treats me nicely because he predicts that I will otherwise kill him. He could also have called the police, or treated me badly, if he so chose. What I do subsequently - whether I kill him, or get arrested, or just go on my merry way, in either case makes no difference to the way he treated me.
- Other cases perhaps bring out the point more clearly. Suppose I am a child. It's December, and I am behaving very well, because I know that if I don't, Santa won't bring me any presents. But in truth, there is no Santa. Clearly, Santa cannot be the cause of my good behavior. Nor can the presents I get on Christmas - we can easily imagine that I behave well, but nonetheless receive no presents at all on Christmas. The motive for my behavior is a compound of two things: (i) my desire to get presents, and (ii) my belief that behaving well will cause me to get presents. In the case of a prediction about the future, it is the prediction that motivates the behavior, not any event in the future. Since it is a belief that is acting as a motive, it is appropriate to probe the justification for the belief. Ordinarily, our beliefs are mostly correct. My belief that I'll lose my job if I stop showing up to work is probably correct. My belief that if I stop eating altogether, I'll die is also probably correct. What about the belief that we'll be tortured forever if we don't build a certain machine? Why should we believe that? 𝒮𝑒𝓇𝑒𝓃𝑒 talk 00:10, 5 February 2025 (UTC)
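A toy way of putting the Santa point (a deliberate oversimplification in which the motive is just the conjunction of the desire and the belief): the child's behaviour is a function of what the child wants and believes, and whether Santa actually exists never enters into the calculation at all.

```python
# Oversimplified sketch: behaviour depends only on the desire and the belief.
def child_behaves_well(desires_presents: bool,
                       believes_behaving_brings_presents: bool) -> bool:
    return desires_presents and believes_behaving_brings_presents

santa_actually_exists = False   # plays no role in the function above

print(child_behaves_well(True, True))   # True: the belief does the motivating
print(child_behaves_well(True, False))  # False: no belief, no motive
```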
- Isn't Newcomb's paradox an example of the future affecting the past?Throwaway187 (talk) 00:39, 5 February 2025 (UTC)
- Newcomb's problem is not an example of the future affecting the past. Actually, "Newcomb's problem" refers to a family of related problems with the same formal structure, and what you describe is a misinterpretation that is (i) prominent only among lay people (i.e. people who do not have a background in decision theory), (ii) prominent only in a subset of Newcomb problems, (iii) prominent only when the problem is framed in a certain way.
- The variant of the problem that causes confusion is the two-boxes Newcomb problem with a "perfect predictor". The correct interpretation of the problem can perhaps be brought out best by considering other variations:
- Imagine you are walking down the street. Hari Seldon, a famous psychohistorian, taps you on the shoulder. He tells you, "I have developed a method for predicting human behavior. I have already run this test of my theory 10,000 times, and my predictions have been wrong only once. I have $1000 cash with me right now. You can take it if you want to. But know this: I made a prediction yesterday whether you would take the $1000. Earlier today, I irrevocably transferred $1,000,000 to your bank account if I predicted yesterday that you wouldn't take the $1000, and transferred nothing if I predicted that you would. So, do you want the $1000?" This example brings out the character of the choice in Newcomb's problem: the predictor (who is highly reliable) has already made their prediction and committed to it. If Seldon transferred the million dollars, you cannot make the transfer cease to exist by taking $1000 now; you will merely make it the case that Seldon's prediction was wrong.
- Another variant: Yesterday, a perfect predictor predicted your choice between one-boxing and two-boxing. If they predicted that you would take only box A, they put $1,000,000 in box A. If they predicted you would take both box A and box B (which contains $1000), they put nothing in box A. Both boxes are transparent - you can see what's inside. You must choose between taking only box A and taking both box A and box B. This variant is known as "Newcomb's problem with transparent boxes". Here, again, there is less temptation to slip into misinterpretation. When you look at box A, you either see nothing or you see $1,000,000. If you see $1,000,000, that money will not magically disappear if you two-box. Likewise, if you see nothing, $1,000,000 will not magically materialize if you decide to take only box A. Your decision now does not retrodict the predictor's prediction yesterday.
- In the context of, say, a formal course on decision theory, this would all be evident from the formal specification of Newcomb's problem, which explicitly indicates that there is no causal link between the predictor's prediction and your choice. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 01:14, 6 February 2025 (UTC)
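A toy payoff sketch of the transparent-boxes variant, using the standard $1,000 / $1,000,000 amounts, makes the same point: the contents of box A were fixed by yesterday's prediction, so the choice made now cannot change what you are already looking at.

```python
# Payoffs in Newcomb's problem with transparent boxes. The predictor moved
# yesterday, so box A's contents are a fixed input, not a function of your choice.
def payoff(box_a_contents: int, choice: str) -> int:
    box_b_contents = 1_000
    if choice == "one-box":
        return box_a_contents
    return box_a_contents + box_b_contents  # "two-box"

for box_a in (0, 1_000_000):
    print(f"Box A visibly contains ${box_a}:")
    for choice in ("one-box", "two-box"):
        print(f"  {choice}: ${payoff(box_a, choice)}")
```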
I think I get it now. After reading more, I actually don't think this is a problem with Yudkowsky and Soares' decision theory; the basilisk is just a wrong application of it. In Newcomb's problem, TDT recommends one-boxing because you don't know whether you're in the "real world" or in a perfect simulation run to see what you would do. Since the predictor's action is based on what you do in the simulation, one-boxing in the simulation means that one million dollars will be put in the box in the real world. But the issue is that the decision being made in the future that affects the past needs to have been predicted already. In the context of the Basilisk, humans are put in the role of the predictor. By thinking this up, we are apparently running a perfect simulation of the Basilisk in our inefficient meat-based computers. This simulation is so perfect that the Basilisk cannot tell whether it is real or a simulation being run inside our heads, despite being an omniscient supercomputer. I think (I could be wrong) that the goal of TDT originally was to try to stop an ASI from going ballistic on humanity by running a simulation of it, and not building it if it decides to kill everyone (kind of like this video: https://www.youtube.com/watch?v=dLRLYPiaAoA). Since the ASI wouldn't know whether it was in the real world or in a simulation made by humans to decide whether to build it, it would have to spare humanity to ensure it gets created. It believes it can affect people in the past if flawless simulations of it are being run in the past. TLDR: the base assumption in Roko's Basilisk is that humans are omniscient. So I actually understand your critiques now, but I don't think they are criticisms of TDT; they are criticisms of this (wrong) application of TDT. I emailed Yudkowsky about this and he agrees with everything I just wrote; he also can't be lying about his position to try to "protect" people, because doing so would certainly anger the Basilisk. So I think Yudkowsky's decision theory is just a scapegoat here.
Also, do you have a formal background in decision theory? I'm not criticizing your comments (which I now see are correct), but I am wondering if there are reasons to reject the "real" version of TDT. Throwaway187 (talk) 14:00, 6 February 2025 (UTC)
- Do you have a formal background in decision theory? Some, although I am not a professional decision theorist.
- As for what TDT says about the Basilisk, I am not aware of any formal account of the theory that has been published in a scholarly venue. Consequently, it is not possible for me to evaluate it in anything like the precise manner in which I can evaluate theories that have been so published, including causal decision theory, evidential decision theory, and Resolute Choice. Rather, I can (and did) offer arguments against the conclusion that the Basilisk's purported behavior is rational, the soundness of which does not depend on the details of whatever theory says otherwise. If Yudkowsky says that his theory is not this theory, I'm in no position to insist otherwise. To the contrary, so much the better for his theory.
- Based just on what you describe, it does seem that there may be problems with the theory (though, to emphasize, I cannot pretend to do justice to it based on only a brief, informal description). It isn't clear to me why we should think that the predictor's predictions are based on anything like the detailed simulations you seem to be describing. Perhaps Yudkowsky has some argument to the effect that this is necessary for the predicting, but without seeing such an argument it strikes me as a pretty substantial assumption. Besides that, it isn't clear to me why the possibility of being a simulation should alter an agent's course of action. Perhaps there's something in there about altering the causal structure of the problem, but I think it would take a lot to show that rigorously. I would also caution that if the proposal does involve altering the causal structure of the problem, I suspect that many (most?) decision theorists would say that that is tantamount to just changing the problem, so that the problem is no longer, e.g., Newcomb's problem, but something else. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 01:22, 7 February 2025 (UTC)
For what it's worth, I did find this article by a professional decision theorist critiquing TDT, but I am not qualified enough to analyze its merits: https://www.umsu.de/blog/2018/688#c2169. Either way, I think I have gotten over my fear of the "Basilisk." Even if it does somehow work out game theoretically, I don't think the basilisk could reconstruct a perfect simulation of me information theoretically considering the links between information and entropy (thanks, Thomas Pynchon). And if it materializes during my lifetime, I'll just blow my brains out. Good luck reconstructing that, Basilisk.
(That last part is a joke, but I fear it could be misread as suicidal intention, which is not the case.)Throwaway187 (talk) 02:00, 7 February 2025 (UTC)
George Orwell
He who controls the past controls the future.
What is the Basilisk's response? Anna Livia (talk) 00:31, 4 February 2025 (UTC)
Zizians
Should we add a section on the "Zizian" cult that has popped up recently? They seem to be connected to LessWrong, TDT, and the Basilisk. They also prey mostly on young trans people, which seems like catnip for right-wing media (even right now, Fox News is reporting on them as a "trans death cult," which is technically true, but not the full story).
For those who don't know what I am referring to, here are some sources: https://en.wikipedia.org/wiki/Killing_of_David_Maland#:~:text=The%20Zizians%20are%20said%20to,different%20hemispheres%20of%20their%20brain.
https://openvallejo.org/2025/01/27/suspects-in-killings-of-vallejo-witness-vermont-border-patrol-agent-connected-by-marriage-license-extreme-ideology/ Throwaway187 (talk) 14:33, 9 February 2025 (UTC)
Also, the Basilisk apparently inspiring multiple murders means it really was an "infohazard" after all, just not the kind it was purported to be. Throwaway187 (talk) 14:36, 9 February 2025 (UTC)
- I would say it's kind of tangential to the idea of the Basilisk. But looking at other news stories these strange individuals might well deserve an article of their own. Bob"Life is short and (insert adjective)" 15:12, 9 February 2025 (UTC)
Either way, they still spawned out of LessWrong. Throwaway187 (talk) 15:14, 9 February 2025 (UTC)
- There is a draft for this topic underway here. —cosmikdebris talk stalk 15:37, 9 February 2025 (UTC)