Talk:Roko's basilisk


This is really messing me up

Learning about this concept has basically destroyed my ability to function. I am beginning to worry that this concept is rational. This page helped me calm down in the past, but I found this reddit comment that responded to some of the critiques in the article. (https://www.reddit.com/r/DnDBehindTheScreen/comments/mdd82b/comment/gswjli8/) Throwaway187 (talk) 22:14, 11 January 2025 (UTC) Throwaway187

My answer will focus on present technology. As in: understand that present "artificial intelligence" is not that intelligent. No matter how much the AI hype-meisters hate the term, "stochastic parrot" is a pretty good description of what it is. It is good (within limitations) at crunching tremendous amounts of data and returning data, fashioned in plausible language. That's it.
In order for Roko's basilisk to actually become a reality, artificial intelligence would need many things: awareness of self, awareness of others, emotions, judgement. Simulations would also need to be somehow sentient and somehow connected to the real thing. I see no evidence of anything *near* this at present. Heck, I see no evidence of artificial intelligence doing simple things like "asking questions", even though I think people are working on it. But for now, a 3-year-old human being (who will ask *plenty* of questions) has more cognitive skills than AI, in my estimation. It will most likely be this way for a while.
The problem with theoretical stuff like this, to me, is that there is literally no explanation of the underlying technology behind it. It's "magic" in the background. If kept as a thought experiment or as a sci-fi story, fair enough, but in order to worry about stuff like this in the real world, from my perspective we have to know how this works in the real world in order to really think about the problem. Science is not magic. BobJohnson (talk) 23:50, 11 January 2025 (UTC)
There are a lot more useful things to mess yourself up with. Like alcohol. Carthage (talk) 00:34, 12 January 2025 (UTC)
And, as I have said at various points (now on the archive pages) - the Basilisk's idea of 'torture' may not be the same as ours. We can find 'disengaging from the internet' and engaging with The Real World (e.g. going for a walk in the countryside, pursuing a hobby ...) enjoyable, but to the Basilisk such a disconnect may be very negative as a concept. Anna Livia (talk) 00:47, 12 January 2025 (UTC)
Also, the very person who wrote the 4-year-old comment you linked to just a few hours ago called Roko's Basilisk 'decade-old brainworms for an AI architecture that probably isn't gonna come to pass', so don't worry about it. TheOneAndOnlyCirrusMan (talk) 02:19, 12 January 2025 (UTC)
As people have pointed out multiple times, the whole thing is basically SF nonsense. You might as well worry about "Alien" or "Predator" coming to eat you in your bed. Bob"Life is short and (insert adjective)" 06:30, 12 January 2025 (UTC)
My suggestion is 'a way out of the mindset arising' - and engaging with The Real World and The Natural World is A Good Thing generally. Anna Livia (talk) 10:52, 12 January 2025 (UTC)
The same user whose comment you've linked to also comments, in the same thread, "my position is that the basilisk is wrong, but that it is wrong for involved technical reasons". So immediately, this is a comment from somebody who does not themself believe that the argument for the Basilisk is good.
Roko grounds the Basilisk's threat specifically in the decision theory in which it was formulated, under which following through on the torture is supposed to be rational. The problem with this defense is that it turns on the success of the underlying decision theory, in this case timeless decision theory. But, as has been pointed out here before, timeless decision theory is not a good decision theory (even the problems that motivate it are not generally agreed by field experts to be problems), and so being a rational agent by the standards of timeless decision theory is not a good indication of being a rational agent. (Note that the 120-page paper linked in the reddit comment is not a publication in a reputable journal. Rather, it was published by MIRI, an organization founded by Eliezer Yudkowsky, who is also the author of the paper.) 𝒮𝑒𝓇𝑒𝓃𝑒 talk 17:46, 12 January 2025 (UTC)
Okay, sorry for responding this late, but I am beginning to worry about this again. What are the issues with Timeless Decision Theory? I checked the archives and I couldn't find a concrete explanation. Throwaway187 (talk) 17:14, 21 January 2025 (UTC) Throwaway187

Are there issues with Timeless/Functional Decision Theory?

The typical response to the Basilisk, that it has no incentive to follow up with the torture, doesn't work under the framework of Timeless Decision Theory. The Basilisk will predict that if humans predict that it won't follow up with the torture, it won't be built. An example of this is shown in Parfit's hitchhiker, a problem created by Yudkowsky. In his own words:

"You are stranded in the desert, running out of water, and soon to die. Someone in a motor vehicle drives up to you. The driver of the motor vehicle is a selfish ideally game-theoretical agent, and what's more, so are you. Furthermore, the driver is Paul Ekman who has spent his whole life studying facial microexpressions and is extremely good at reading people's honesty by looking at their faces.

The driver says, "Well, as an ideal selfish rational agent, I'll convey you into town if it's in my own interest to do so. I don't want to bother dragging you to Small Claims Court if you don't pay up. So I'll just ask you this question: Can you honestly say that you'll give me $1,000 from an ATM after we reach town?"

On some decision theories, an ideal selfish rational agent will realize that once it reaches town, it will have no further incentive to pay the driver. Thus, agents of this type answer "Yes," whereupon the driver says "You're lying" and drives off leaving them to die."

The solution here is to adopt a decision theory that requires following through on the promise to deliver the $1,000; this is the same mechanism that makes the Basilisk work.

I hope somebody can find a logical flaw in this, because I can't. Throwaway187 (talk) 00:35, 30 January 2025 (UTC)Throwaway187

It's important to recognize that Yudkowsky is not a professional decision theorist. He did not pursue an advanced education in decision theory, and consequently has an inaccurate view of what is novel in the field. The problem you present here as having been created by Yudkowsky is just a variant of a well-known decision problem, and the proposed solution is just a rehashing of an already-known decision procedure called Resolute Choice. The problem with Resolute Choice, and the reason why it is unpopular among decision theorists, is that it is an unrealistic model of choice, in that it treats agents as having as live options choices that they cannot actually make.
Consider a more standard problem to start. Suppose you're heading home from work, and you happen to live in Canada, where the weather is unpleasantly cold this time of year. There are two routes you can take home. Route A is shorter, so you spend less time in the cold, whereas Route B is longer, so that you spend more time in the cold. Based just on this, Route A is better. However, Route A takes you past the local smoke shop, and you know from experience that whenever you walk past the smoke shop, you have the irresistible urge to buy a pack of cigarettes. Since you've quit smoking, you regard this as extremely undesirable - much worse than spending a bit more time in the cold.
The best course of action for you would be to take Route A and walk past the shop without buying cigarettes. The worst course of action is taking Route A and buying cigarettes. In the middle is taking Route B, which is longer but avoids the smoke shop. Resolute Choice recommends that you, in effect, make a deal with your future self that you won't buy cigarettes, and proceed down Route A. In making this recommendation, Resolute Choice purports to secure you the best of all possible worlds, from your own perspective. The problem is that, by stipulation, at the time you reach the smoke shop, you will want to buy cigarettes, and your past self will have no power to coerce you into refraining. The option that Resolute Choice recommends is really just illusory; the only courses of action that you can actually take are Route B or Route A AND buy cigarettes. You do better by choosing Route B, and this is what mainstream decision theories recommend in this case.
In the case Yudkowsky describes, we can imagine 4 "possible" courses: (1) you take the ride and don't pay; (2) you take the ride and pay $1000; (3) you don't take the ride and pay $1000; (4) you don't take the ride and don't pay $1000. Option (3) is silly, and can be ignored.
Everybody agrees that the best thing for you, if you can do it, is (1). Resolute Choice recommends (2) (as does Yudkowsky). Most decision theorists would say you're stuck with (4). Why? Well, the driver can tell in advance if you're going to cheat them, so (1) is not actually on the table. Since (3) is just silly, this leaves us with (2) and (4). But, as the case is described, you are like the person who can't resist buying cigarettes - if you take the ride, you will cheat the driver. So option (2) is illusory, just like taking Route A without buying cigarettes. That just leaves (4).
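To make the feasibility point concrete, here is a minimal sketch in Python. The utility numbers, option labels, and feasibility flags are illustrative assumptions of mine, not anything from Yudkowsky or the decision-theory literature; the point is only the structure: strike out the options a non-resolute agent cannot actually take, then maximize over what is left.

 # Minimal sketch: pick the best option among those that are actually live.
 # Utilities and feasibility flags are illustrative assumptions only.
 def best_feasible(options):
     """Return the highest-utility option whose 'feasible' flag is True."""
     live = [o for o in options if o["feasible"]]
     return max(live, key=lambda o: o["utility"])
 # Smoke-shop case: "Route A without cigarettes" is not a live option for a
 # non-resolute agent, so Route B wins among what remains.
 smoke_shop = [
     {"name": "Route A, no cigarettes", "utility": 10, "feasible": False},
     {"name": "Route A, buy cigarettes", "utility": 0, "feasible": True},
     {"name": "Route B (longer, colder)", "utility": 7, "feasible": True},
 ]
 # Hitchhiker case: (1) is blocked by the driver's lie detection, (2) is
 # blocked because a non-resolute agent will not pay once in town, (3) is
 # silly, which leaves (4).
 hitchhiker = [
     {"name": "(1) ride, don't pay", "utility": 10, "feasible": False},
     {"name": "(2) ride, pay $1000", "utility": 9, "feasible": False},
     {"name": "(3) no ride, pay $1000", "utility": -10, "feasible": True},
     {"name": "(4) no ride, don't pay", "utility": -9, "feasible": True},
 ]
 print(best_feasible(smoke_shop)["name"])  # Route B (longer, colder)
 print(best_feasible(hitchhiker)["name"])  # (4) no ride, don't pay

Nothing in the sketch compares (2) and (4) directly; (2) simply never enters the maximization, which is the sense in which it is off the table for an agent who cannot bind its future self.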
It's worth emphasizing here that decision theorists don't think that (4) is better than (2). They think that the agents that decision theory is concerned with (i.e. human beings) are not Resolute Agents, and so do not actually have (2) as an option. Failing to recognize this is seen as a drawback of a theory, not an advantage. If, in Yudkowsky's thought experiment, you had the option to take a pill that would compel you to pay up on reaching town, taking that pill would be widely regarded as a good idea by decision theorists. But gaining the ability to coerce your future self is not a matter of "adopting a decision theory".
Roko's Basilisk suffers an additional failure. The Basilisk is, in effect, supposed to be a Resolute Agent that coerces its future self into a certain course of action in order to secure a benefit that it has already secured. This is a bit like swearing to yourself that you won't buy any cigarettes after having already made it home, or taking the pill that will coerce you to pay the driver after they have already driven you to town. The only reason to take the pill is to ensure that you make it to town, so it doesn't make sense to take it if you are already there (quite the contrary - if you have made it to town, and you didn't pay the driver, you do better by fleeing without paying than by taking the pill and paying the $1000). The Basilisk's course of action does not even make sense on Resolute Choice. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 03:44, 30 January 2025 (UTC)
Okay, thank you for trying to help me with this, but isn't the point of the Basilisk that it has to follow up with the torture because if it doesn't, humans will predict that it won't and not build it? I.e., the basilisk has already taken the pill. Also, the basilisk is supposed to be a hyperrational entity, so it could reliably precommit to decisions. Is the issue here that humans aren't Paul Ekman in this scenario and don't know whether or not the basilisk is lying to us about precommitting anyways?
I should note that all of my "knowledge" of decision theory comes from anxiety spurred on by this very topic, so if this is egregiously wrong or misinterprets what you said please let me know. Throwaway187 (talk) 13:42, 30 January 2025 (UTC)
Isn't the point of the Basilisk that it has to follow up with the torture because if it doesn't humans will predict that it won't and not build it? That is what Roko and Yudkowsky have in mind. The problem is that the Basilisk's behavior is irrational.
Recall that the core idea of Resolute Choice is that you can make a deal with your future self - if I take Route A, you don't buy any cigarettes, deal? The problem with Resolute Choice is that, once I'm in front of the smoke shop, past-me is no longer around to enforce the deal. I can buy my cigarettes (which is what I want to do), and nobody is going to stop me or punish me or anything. To put it another way, I have no reason to follow through on a plan I no longer endorse. At the age of 5, I may have thought that the best of all possible life plans was to play with plastic toys and watch TV for the rest of my life. I may even have sworn to myself that I would do just that. Later on, at the age of 18, when I no longer regarded that as a good life plan, the opinions of 5-year-old-me were no reason for me to follow through with the plan anyway.
Consider another idea. Suppose that I'm standing outside the smoke shop, and I decide that I'm going to make a deal with my past self - you take Route B, and I won't buy any cigarettes. Zounds, foiled by my past self! It's self-evidently absurd to try to make a deal with my past self to act otherwise than I actually did act. I already did whatever I did in the past. But this is the kind of deal that the Basilisk is trying to make.
Suppose I'm at a restaurant, and the waiter takes my order. 15 minutes later, he serves me my meal. As he places it on the table, I pull out a gun and say, "If you didn't take my order earlier, I'll shoot you!"
Suppose I'm at a restaurant, and the waiter neglects to take my order. 15 minutes later, he brings me a basket of bread. As he places it on the table, I pull out a gun and say, "If you didn't take my order earlier, I'll shoot you!"
In either case, does my threat have any impact on whether the waiter took my order 15 minutes ago? Does the answer to that question depend in any way on whether I proceed to shoot him?
It might be tempting to imagine that the Basilisk is different, because we make predictions about how it will behave. But why would that change anything?
Suppose I create a hoax news article reporting on me, with a photograph, as a murderous, retributive serial killer of waiters who failed to take my order at restaurants. I then go to a restaurant, where the waiter recognizes me from the article, which he believes to be true. He either takes my order or doesn't. Does it matter whether I brought a gun with me?
Suppose I really am a murderous, retributive serial killer of waiters who failed to take my order at restaurants, and have been reported on, with photographs, in the news. I go to a restaurant, where the waiter recognizes me from the news, which he believes to be true. He either takes my order or doesn't. Does it matter whether I brought a gun with me?
The central claim that Roko makes is that the Basilisk should follow through on its threat to torture, because this will make us build it faster. But the Basilisk can't change the past no matter what it does. I can't make the waiter who didn't take my order have taken my order by shooting him now, so "it'll make the waiter have taken my order" isn't a reason for me to shoot the waiter. The Basilisk can't make people who didn't build it fast have built it fast by torturing them now, so "it'll make them have built me faster" isn't a reason for the Basilisk to torture. But this is exactly the alleged reason for the Basilisk to torture people. So the Basilisk has no reason to torture people. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 02:35, 31 January 2025 (UTC)
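A minimal sketch of that last point, with made-up numbers (the function name, the utility scale, and the torture cost are all assumptions of mine for illustration): by the time the Basilisk can decide anything, the help it received is a fixed constant, so torturing can only subtract.

 # Minimal sketch: once the Basilisk exists, the past is a constant.
 # The numbers are illustrative assumptions, not anything from Roko.
 def basilisk_utility(help_already_received, torture_now, torture_cost=1):
     """The 'help' term is fixed history; only the torture choice varies."""
     return help_already_received - (torture_cost if torture_now else 0)
 history = 42  # however much help it actually got, fixed before it acts
 print(basilisk_utility(history, torture_now=True))   # 41
 print(basilisk_utility(history, torture_now=False))  # 42

Whatever value the fixed history takes, not torturing does at least as well, so "it will make them have built me faster" never shows up as anything the Basilisk's choice can actually move.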
Okay yeah, that makes sense. Thank you. Throwaway187 (talk) 13:28, 31 January 2025 (UTC)
Hang on, is nuclear deterrence theory an example of decisions made in the future affecting the past? A rational agent with a nuke heading its way has no reason to retaliate: it's dead, and further loss of life is worse than just accepting death. But everyone knows this, so everyone precommits to maximum retaliation for even the smallest violation - and if you trust that everyone will go through with it (for example, if the nukes automatically retaliate with no human involvement), then no rational agent uses nukes in the first place.
In the same way, if the basilisk precommits to doing torture even if doing so has no material benefit to it, then people will recognize that and be forced to build it anyways. Throwaway187 (talk) 22:24, 3 February 2025 (UTC)
Is nuclear deterrence theory an example of decisions made in the future affecting the past? No. Consider some possibilities:
The US has a powerful nuclear arsenal. The USSR also has a powerful nuclear arsenal. US leaders decide not to attack the USSR.
The US has a powerful nuclear arsenal. The USSR also has a powerful nuclear arsenal. US leaders decide to attack the USSR anyway. Everybody in the USSR dies.
The US has a powerful nuclear arsenal. The USSR also has a powerful nuclear arsenal. US leaders decide to attack the USSR anyway. Some people in the USSR survive, and they launch a retaliatory strike.
The first case is the only instance of successful deterrence. Note that no strikes occur at all in the first case, so that nuclear strikes cannot be the cause of the successful deterrence. The second case describes a successful first strike by the US, and the third case a failed first strike. But in the second case, the death of everybody in the USSR is not the cause of the US attacking. Likewise, in the third case, the retaliatory strike does not cause the first strike (nor of course does it cause deterrence - there was a strike, and deterrence thus failed).
All of this is not to say that the US leadership would not take into account their expectations about the outcome of making a strike. But this is just what everybody does whenever they make a decision. When I buy an ice cream cone, I do it partly on the basis of my belief that it will taste like ice cream. The problem with the Basilisk is that I have no reason to think it will follow through, because its purported reasons for torturing are absurd. In contrast, contrary to what you claim, there are presumed reasons for a country to retaliate in response to a nuclear first strike. In particular, it is expected (i) that some agents in the struck population will survive, (ii) that the first strike will primarily target the nuclear arsenal and military facilities in preparation for a ground invasion, (iii) that residual nuclear arms can be used to counterattack civilian population centers and thereby cripple the ground invasion. Deterrence is centered on making it credible that a first nuclear strike will (i) carry an extremely high cost that can be imposed by the attacked state and (ii) fail to achieve the desired military objective. Note that deterrence is still forward-looking. Compare:
I'm at a restaurant. The waiter sets a disgusting-smelling scented candle on my table. I pull out a gun and say "if you brought that candle to my table, I'll shoot you!"
I'm at a restaurant. The waiter is approaching me with a disgusting-smelling scented candle. I pull out a gun and say "if you bring that candle any closer, I'll shoot you!"
The former case is completely ineffective deterrence. I don't threaten anything until after the undesired action has already been carried out. The latter case is perfectly good deterrence - I make a threat in order to stop the waiter from doing something he has not already done. The Basilisk is not even purported to be doing the latter sort of thing.
Note also the relevant lines of reasoning. "If I threaten the waiter, then he won't bring that awful candle over here" makes sense, but "If I threaten the waiter, he won't have already brought that awful candle over here" does not. We can imagine me shooting the waiter for sheer vengeance, but that is not a purported motive of Roko's Basilisk. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 02:30, 4 February 2025 (UTC)
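The forward-looking point can be put in a small sketch as well (the payoff numbers and the function name are my own illustrative assumptions): the waiter's choice can only be affected by threats that are in force before the choice is made.

 # Minimal sketch: a threat deters only if it is issued before the choice
 # it is meant to deter. Payoff numbers are illustrative assumptions.
 def waiter_brings_candle(threat_known_before_choice):
     """The waiter weighs the options using only what he knows at choice time."""
     payoff_bring = 1 - (100 if threat_known_before_choice else 0)
     payoff_skip = 0
     return payoff_bring > payoff_skip
 print(waiter_brings_candle(True))   # False: deterred, candle stays away
 print(waiter_brings_candle(False))  # True: the threat came too late to matter

Nuclear deterrence is of the first kind: the threat of retaliation is common knowledge before any would-be first strike. The Basilisk's torture would "deter" a choice (building it slowly or not at all) that has already been made by the time the threat could be carried out.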

So once again, I don't really know much about decision theory, so I'm not sure if this was already addressed and I'm just going in circles. But if I am 100% sure that (A) an ASI will emerge and (B) the ASI will eternally torture anyone who doesn't help it come into being, then I will almost certainly help the ASI come into being. I would do this even if it appeared to be irrational. If I knew the ASI had precommitted to doing this, I would simply be forced to help the ASI. The thought experiment about the waiter doesn't have the caveat where the waiter predicts that the customer will pull out a gun if they deliver the candle, and the customer is predicting that the waiter knows that. Throwaway187 (talk) 13:31, 4 February 2025 (UTC)

It is generally understood that an agent's beliefs about the outcomes of their actions have a role to play in determining what actions they ought to take. An agent who believes that they will be tortured extensively if they do not take action X has a subjective reason to take action X. Whether or not this is a good reason depends on the basis of their belief (i.e. whether their belief is justified). Sticking with the waiter, consider:
Suppose I am a serial killer, targeting customer service people who displeased me. My face has been all over the news recently. I go to a restaurant, where I am recognized by the waiter. Fearing me, the waiter does everything he can to placate me.
Suppose I am a serial liar. I cook up an ingenious hoax, successfully getting various news agencies to report on me as a serial killer targeting customer service people. I go to a restaurant, where I am recognized by the waiter. Fearing me, the waiter does everything he can to placate me.
In both cases, the waiter treats me nicely because he predicts that I will otherwise kill him. He could also have called the police, or treated me badly, if he so chose. What I do subsequently - whether I kill him, or get arrested, or just go on my merry way, in either case makes no difference to the way he treated me.
Other cases perhaps bring out the point more clearly. Suppose I am a child. It's December, and I am behaving very well, because I know that if I don't, Santa won't bring me any presents. But in truth, there is no Santa. Clearly, Santa cannot be the cause of my good behavior. Nor can the presents I get on Christmas - we can easily imagine that I behave well, but nonetheless receive no presents at all on Christmas. The motive for my behavior is a compound of two things: (i) my desire to get presents, and (ii) my belief that behaving well will cause me to get presents. In the case of a prediction about the future, it is the prediction that motivates the behavior, not any event in the future. Since it is a belief that is acting as a motive, it is appropriate to probe the justification for the belief. Ordinarily, our beliefs are mostly correct. My belief that I'll lose my job if I stop showing up to work is probably correct. My belief that if I stop eating altogether, I'll die is also probably correct. What about the belief that we'll be tortured forever if we don't build a certain machine? Why should we believe that? 𝒮𝑒𝓇𝑒𝓃𝑒 talk 00:10, 5 February 2025 (UTC)
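To put the belief-as-motive point in the same sketch style (the probabilities and the effort and reward numbers are illustrative assumptions of mine): the child's decision depends only on how much credence they give the Santa story, not on any future event.

 # Minimal sketch: the motive is a present belief (a probability), not a
 # future event. Effort/reward numbers are illustrative assumptions.
 def expected_value(action, p_reward_is_real):
     """Child deciding whether to behave, given a believed chance that good
     behavior really will be rewarded."""
     if action == "behave":
         return -1 + p_reward_is_real * 10  # small cost now, possible payoff
     return 0  # misbehave: no cost, no payoff
 for p in (0.0, 0.5, 1.0):
     best = max(["behave", "misbehave"], key=lambda a: expected_value(a, p))
     print(p, best)  # 0.0 misbehave, 0.5 behave, 1.0 behave

Everything turns on p, i.e. on whether the belief is justified, which is why the right question about the Basilisk is why anyone should believe the torture claim in the first place.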
Isn't Newcomb's paradox an example of the future affecting the past? Throwaway187 (talk) 00:39, 5 February 2025 (UTC)
Newcomb's problem is not an example of the future affecting the past. Actually, "Newcomb's problem" refers to a family of related problems with the same formal structure, and what you describe is a misinterpretation that is (i) prominent only among lay people (i.e. people who do not have a background in decision theory), (ii) prominent only in a subset of Newcomb problems, (iii) prominent only when the problem is framed in a certain way.
The variant of the problem that causes confusion is the two-boxes Newcomb problem with a "perfect predictor". The correct interpretation of the problem can perhaps be brought out best by considering other variations:
Imagine you are walking down the street. Hari Seldon, a famous psychohistorian, taps you on the shoulder. He tells you, "I have developed a method for predicting human behavior. I have already run this test of my theory 10,000 times, and my predictions have been wrong only once. I have $1000 cash with me right now. You can take it if you want to. But know this: I made a prediction yesterday whether you would take the $1000. Earlier today, I irrevocably transferred $1,000,000 to your bank account if I predicted yesterday that you wouldn't take the $1000, and transferred nothing if I predicted that you would. So, do you want the $1000?" This example brings out the character of the choice in Newcomb's problem: the predictor (who is highly reliable) has already made their prediction and committed to it. If Seldon transferred the million dollars, you cannot make the transfer cease to exist by taking $1000 now; you will merely make it the case that Seldon's prediction was wrong.
Another variant: Yesterday, a perfect predictor predicted your choice between one-boxing and two-boxing. If they predicted that you would take only box A, they put $1,000,000 in box A. If they predicted you would take both box A and box B (which contains $1000), they put nothing in box A. Both boxes are transparent - you can see what's inside. You must choose between taking only box A and taking both box A and box B. This variant is known as "Newcomb's problem with transparent boxes". Here, again, there is less temptation to slip into misinterpretation. When you look at box A, you either see nothing or you see $1,000,000. If you see $1,000,000, that money will not magically disappear if you two-box. Likewise, if you see nothing, $1,000,000 will not magically materialize if you decide to take only box A. Your decision now does not reach back and determine the predictor's prediction yesterday.
In the context of, say, a formal course on decision theory, this would all be evident from the formal specification of Newcomb's problem, which explicitly indicates that there is no causal link between the predictor's prediction and your choice. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 01:14, 6 February 2025 (UTC)
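For the transparent-boxes variant above, a minimal sketch (the dollar amounts follow the problem statement; the function name and framing are mine) makes the fixed-contents point explicit: whatever was put in box A yesterday, your choice today cannot change it.

 # Minimal sketch of Newcomb's problem with transparent boxes: box A's
 # contents were fixed by yesterday's prediction; only your choice varies.
 def payoff(box_a_contents, choice):
     """box_a_contents was set yesterday; 'choice' is made today."""
     box_b = 1000
     if choice == "one-box":
         return box_a_contents
     return box_a_contents + box_b  # two-box
 for contents in (0, 1_000_000):
     print(contents, payoff(contents, "one-box"), payoff(contents, "two-box"))
 # 0 0 1000
 # 1000000 1000000 1001000

Holding the contents fixed, two-boxing is exactly $1000 better in both rows; the live dispute between decision theories is how to weigh the predictor's reliability when choosing, not whether today's choice reaches back and rewrites yesterday's prediction.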


I think I get it now. After reading more, I actually don't think this is a problem with Yudkowsky and Soares' decision theory; the basilisk is just a wrong application of it. In Newcomb's problem, TDT recommends one-boxing because you don't know if you're in the "real world" or a perfect simulation run to see what you would do. Since the predictor's action is based on what you do in the simulation, one-boxing in the simulation means that one million dollars will be put in the box in the real world. But the issue is that the decision being made in the future that affects the past needs to have been predicted already. In the context of the Basilisk, humans are put in the role of the predictor. By thinking this up, we are apparently running a perfect simulation of the Basilisk in our inefficient meat-based computers. This simulation is so perfect that the Basilisk cannot tell if it is real or a simulation being run within our heads, despite it being an omniscient supercomputer. I think (I could be wrong) the goal of TDT originally was to try to stop an ASI from going ballistic on humanity by running a simulation of it, and not building it if it decides to kill everyone (kind of like this video: https://www.youtube.com/watch?v=dLRLYPiaAoA). Since the ASI wouldn't know if it was in the real world or a simulation made by humans to decide whether to build it or not, it would have to spare humanity to ensure it gets created. It believes it can affect people in the past if flawless simulations of it are being run in the past. TLDR: the base assumption in Roko's Basilisk is that humans are omniscient. So I actually understand your critiques now, but I don't think they are criticisms of TDT; they are criticisms of this (wrong) application of TDT. I emailed Yudkowsky about this and he agrees with everything I just wrote; he also can't be lying about his position to try to "protect" people, because doing so would certainly anger the Basilisk. So I think Yudkowsky's decision theory is just a scapegoat here.

Also, do you have a formal background in decision theory? I'm not criticizing your comments (which I now see are correct), but I am wondering if there are reasons to reject the "real" version of TDT. Throwaway187 (talk) 14:00, 6 February 2025 (UTC)

Do you have a formal background in decision theory? Some, although I am not a professional decision theorist.
As for what TDT says about the Basilisk, I am not aware of any formal account of the theory that has been published in a scholarly venue. Consequently, it is not possible for me to evaluate it in anything like the precise manner in which I can evaluate theories that have been so published, including causal decision theory, evidential decision theory, and Resolute Choice. Rather, I can (and did) offer arguments against the conclusion that the Basilisk's purported behavior is rational, the soundness of which does not depend on the details of whatever theory says otherwise. If Yudkowsky says that his theory is not this theory, I'm in no position to insist otherwise. To the contrary, so much the better for his theory.
Based just on what you describe, it does seem to me that there may be problems with the theory (though, to emphasize, I cannot pretend to do justice to it based on only a brief, informal description). It isn't clear to me why we should think that the predictor's predictions are based on anything like the detailed simulations you seem to be describing. Perhaps Yudkowsky has some argument to the effect that this is necessary for the predicting, but without seeing such an argument it strikes me as a pretty substantial assumption. Besides that, it isn't clear to me why the possibility of being a simulation should alter an agent's course of action. Perhaps there's something in there about altering the causal structure of the problem, but I think it would take a lot to show that rigorously. I would also caution that if the proposal does involve altering the causal structure of the problem, I suspect that many (most?) decision theorists would say that that is tantamount to just changing the problem, so that the problem is no longer, e.g., Newcomb's problem, but something else. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 01:22, 7 February 2025 (UTC)

For what it's worth, I did find this article by a professional decision theorist critiquing TDT, but I am not qualified enough to analyze its merits: https://www.umsu.de/blog/2018/688#c2169. Either way, I think I have gotten over my fear of the "Basilisk." Even if it does somehow work out game theoretically, I don't think the basilisk could reconstruct a perfect simulation of me information theoretically considering the links between information and entropy (thanks, Thomas Pynchon). And if it materializes during my lifetime, I'll just blow my brains out. Good luck reconstructing that, Basilisk.

(That last part is a joke, but I fear it could be misread as suicidal intention, which is not the case.)Throwaway187 (talk) 02:00, 7 February 2025 (UTC)

George Orwell

He who controls the past controls the future.

What is the Basilisk's response? Anna Livia (talk) 00:31, 4 February 2025 (UTC)

Zizians

Should we add a section on the "Zizian" cult that has popped up recently? They seem to be connected to LessWrong, TDT, and the Basilisk. They also prey mostly on young trans people, which seems like catnip for the right-wing media (even right now, Fox News is reporting on them as a "trans death cult," which is technically true, but not the full story).

For those who don't know what I am referring to, here are some sources: https://en.wikipedia.org/wiki/Killing_of_David_Maland#:~:text=The%20Zizians%20are%20said%20to,different%20hemispheres%20of%20their%20brain.

https://zizians.info/

https://openvallejo.org/2025/01/27/suspects-in-killings-of-vallejo-witness-vermont-border-patrol-agent-connected-by-marriage-license-extreme-ideology/ Throwaway187 (talk) 14:33, 9 February 2025 (UTC)

Also, the Basilisk apparently inspiring multiple murders means it really was an "infohazard" after all, just not the kind it was purported to be. Throwaway187 (talk) 14:36, 9 February 2025 (UTC)

I would say it's kind of tangential to the idea of the Basilisk. But looking at other news stories these strange individuals might well deserve an article of their own. Bob"Life is short and (insert adjective)" 15:12, 9 February 2025 (UTC)

Either way, they still spawned out of LessWrong Throwaway187 (talk) 15:14, 9 February 2025 (UTC)

There is a draft for this topic underway here. —cosmikdebris talk stalk 15:37, 9 February 2025 (UTC)

A quote/thread making the rounds of Bluesky

From someone named John Skiles Skinner, who was part of the 18F government IT initiative before Elon Musk came in and took a wrecking ball to it:

"There is a quasi-religion in Silicon Valley that views AI as godlike. This faith has always been parallel to Evangelical Christianity: salvation (transhumanism), the rapture (the technological singularity), and demons (Roko's Basilisk). Lately the AI faith has fully fused with Christian Nationalism."[1]

While I'll have to think about how far these parallels with Christian fundamentalism hold (I'm not sure how exactly it all fits), I can easily buy the argument that part of the irrational fascination with some of these logically rather ridiculous futurist concepts is that they tickle (and probably sprang from) the same "portion" of humans that also fuels generic religious impulses, the "patterns human beings evolved to deal with our anxieties about life, death, and the future".

I haven't done much deep work with LLM tech, but I have done some machine learning stuff in the past (random forest); it, plus some other little things here and there, is enough to make me puzzled over a lot of the hype concerning AI. Especially the people who actually panic over things like Roko's Basilisk... let alone start cults over them like the Zizian thing. Much of this makes more sense, though, if -- as sometimes packaged in certain spaces -- these concepts are perhaps thought of as "techno-mysticism" of a sort... esotericism for computer nerds, I suppose. (Does this make LessWrong the IT version of the Hermetic Order of the Golden Dawn? Hmm....) BobJohnson (talk) 23:22, 28 March 2025 (UTC)