Talk:Roko's basilisk
not valid reasons
"that torturing the copy should feel the same to you as torturing the you that's here right now that the copy can still be considered a copy of you when by definition it will experience something different from you"
Those are not valid reasons, because you could be the simulated version of the guy who didn't help the AI happen, so you will be tortured. — Unsigned, by: 177.207.103.210 / talk / contribs
- Tortured when, in that case? Obviously not at present. Are you proposing that the basilisk AI has produced a simulated person who first goes through a life and then, having been condemned in advance, experiences an afterlife basically similar to the Christian hell? If you are that simulated person, then it is apparently too late. Another person, the "original", was judged in a way that condemned you to hell, and you'll go there no matter what you do. It's as if the Christian God found an ancestor of yours a heretic and then decided, on that account, to send you into the world with a one-way ticket to hell already in hand. You can't help the basilisk AI or alter its history, because you live in its simulation after it's already built. --ApooftGnegiol (talk) 21:45, 6 July 2024 (UTC)
- There is also the problem of "you" - when? Fifteen-year-old you and thirty-year-old you will be quite different people. As will all the people in between. They will all have different memories, experiences, opinions etc. Not to mention ninety-five-year-old you who may have little idea of what is going on.
- So who gets tortured? Is it an infinite number of "yous"? Or one for every day of the week? Or one for each time your opinion changed? Which copy of "you" is the real one? Bob"Life is short and (insert adjective)" 19:34, 7 July 2024 (UTC)
But one thing that is not being taken into account: what if the basilisk comes into existence within this lifetime and captures its defectors? I have read that AI will develop very fast, and that its rate of development will itself keep accelerating; who knows where technology will be in the next 40-50 years. Any thoughts on this? 117.199.208.85 (talk) 10:14, 20 July 2024 (UTC)
- Have you checked the earlier discussions (including searching archives)? Seems in part more like the one above this one, and in part more about cybernetic revolt in general. For the latter there's again a whole series of hypotheticals stacked up, and it's been discussed to death elsewhere already. --ApooftGnegiol (talk) 13:46, 20 July 2024 (UTC)
- And remember my comment above: 'torture' for the Basilisk may not equate to torture as we humans consider it. There are many for whom a few hours separated from the internet/their computer etc. while going on a countryside walk, attending a sports match, or the equivalent to taste, is not the torture the AI thinks it is. Anna Livia (talk) 19:26, 20 July 2024 (UTC)
- The Basilisk has got to be one of the biggest wastes of time to come out of the transhumanist community. Carthage (talk) 19:39, 20 July 2024 (UTC)
- I agree. Every single element of the whole hypothesis is obviously flawed. Yet, there are still people who seem to want to take it seriously. Bob"Life is short and (insert adjective)" 19:45, 20 July 2024 (UTC)
Okay, got it. Thanks for the help. 117.199.215.211 (talk) 01:06, 21 July 2024 (UTC)
Is it making a rational decision?
I'm 99% there that this thought experiment is dumb and makes no sense, but one refutation I see is the equivalent of the "many worlds" rebuttal to Pascal's wager. However, a part of me is worried that this particular AI is more likely than others that, say, kill everyone because it's a rational decision to make, so some debunking on that would help me greatly. Also, I'd like to note that you link to Alexander Kruel on this page, but he's now a known eugenicist, if you were unaware. https://x.com/XiXiDu/status/1228985454751555584 --Bluepikmin (talk) 03:10, 10 October 2024 (UTC)
- First of all, your writing style makes it hard to decipher what your actual question is. Second, there is nothing really "rational" about the whole concept. It's like assuming that at some point in time there COULD come into existence a flying spaghetti monster from a nice dish of pasta that then retroactively punishes everyone who didn't believe in it by revoking their beer-volcano privileges. Irian (talk) 07:00, 10 October 2024 (UTC)
Sorry, I probably could have worded that more clearly. What I was trying to say was: putting aside that the concept is nonsense, does it make sense for the basilisk to use the threat of torture, and go through with it, as a form of incentive to get people to help bring it into existence sooner? Earlier I was trying to say that one of the rebuttals is that you could give the Basilisk any arbitrary reason to torture anyone, so it doesn't make sense to act on only one possibility (like how Pascal's Wager only talks about the Christian god).--Bluepikmin (talk) 12:19, 10 October 2024 (UTC)
- Why would the Basilisk give a shit about you in particular? You have no meaningful ability to add to or detract from the progress towards its creation. That seems really fucking petty and stupid for a supposed silicon god to do. Carthage (talk) 12:31, 10 October 2024 (UTC)
- I think it's less about "me" and more about getting as many people as possible to build it, but threatening someone is a bad way to get them to do what you want. It's more likely to manifest in people going the other way and hindering its progress, which is shown by how it's become a meme and basically a funny way to joke about LW. So I guess in that sense the general reaction has proven that the thought experiment is dumb and doesn't work. I think I was just worried that the AI god basilisk would be justified in torturing my simulation because I didn't help it, but typing it out makes me realize it's pretty dumb.--Bluepikmin (talk) 12:45, 10 October 2024 (UTC)
I think I've probably made it a bit too confusing here, so I just wanted to clarify that I'm looking for confirmation that the type of AI in Roko's Basilisk isn't any more likely than an AI with any other arbitrary goals, and that its decision to torture people who don't help bring about its existence doesn't make sense.--Bluepikmin (talk) 13:35, 10 October 2024 (UTC)
- I'm sure that we can all agree that the Basilisk is one of the stupidest SF AIs ever suggested. And, if I understand your point, the dumb AI doesn't actually need to carry out its threat. It exists. It's won. Why waste its resources actually creating and torturing low-resolution versions of "people" after it exists?
- But as all the rest of the concept needs a really stupid AI, OK, I guess we can hypothesize this stupidity as well. Bob"Life is short and (insert adjective)" 13:57, 10 October 2024 (UTC)
Okay, yeah, that makes a lot of sense actually; I feel better, haha. I think I just needed a bit of a sanity check. Thanks for your help.--Bluepikmin (talk) 14:13, 10 October 2024 (UTC)
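To spell out the point made above as a toy expected-utility calculation: the payoff numbers below are invented for illustration (the thought experiment specifies none), and only their signs matter. Once the basilisk exists, following through on the threat costs resources and gains nothing, so a utility-maximizing AI would abstain.

```python
# Toy expected-utility comparison for an already-existing basilisk.
# Payoff numbers are made up; only their signs carry the argument.

def utility(action: str) -> float:
    torture_cost = 10.0  # resources spent running torture simulations
    torture_gain = 0.0   # the AI already exists; punishing the past changes nothing
    if action == "torture":
        return torture_gain - torture_cost
    return 0.0           # abstain: keep the resources

best = max(["torture", "abstain"], key=utility)
print(best)  # "abstain" -- carrying out the threat is strictly dominated
```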
Okay, pretend it's serious
Let's give this dumb, dumb idea some credence. The first critical-thinking test to apply to any purely hypothetical idea that influences your behavior on the basis that it might not be wrong is: "Could the exact opposite also be true?"
What if your hypothetical, unverifiable, mysterious, beyond-our-comprehension AI decides to find a way to bribe you into making it by giving you exactly what you want? Such an interpretation has no less evidence or plausibility than this "what if it tortures me for not making it?" idea.
The next question I think one should ask about hypotheticals with no evidence is "Why am I spending time on this hypothetical question instead of any number of other ones?" In this case, I'd say it's because of a fundamental appeal to emotion: "What if you get tortured?" is an extreme and unlikely scenario designed specifically to appeal to your instinctive fears of authority. If you're going to waste time on unlikely ideas, you could just as easily settle on "What if there are aliens hiding under the surface of Mars preparing invasion plans right now?" or "What if my neighbor Steve down the street is building a homemade nuclear device, planning to set it off in DC and start World War 3?", which have similar levels of face validity.
There is simply no reason to worry about ideas that lack evidence. If someone creates a relatively intelligent AI, and it starts musing about torturing people for motivation, then you'll have evidence to care about this bullshit. Before that, put your mind to something real. ikanreed 🐐Bleat at me 14:18, 10 October 2024 (UTC)
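The symmetry argument above can be put as a toy calculation. The probabilities and payoffs below are invented for illustration; only the structure matters. For every hypothetical AI that punishes you for not helping it, you can posit an equally evidence-free AI that punishes you for helping, and the expected values cancel.

```python
# Toy 'counter-basilisk' calculation. All numbers are invented;
# the structure, not the values, carries the argument.
p = 1e-9         # arbitrary tiny credence assigned to any one hypothetical AI
torture = -1e6   # disutility of being tortured (units are irrelevant)

# The basilisk punishes you for NOT helping; an equally unsupported
# 'anti-basilisk' punishes you FOR helping. Absent evidence for either:
ev_of_helping = p * torture      # risk from the hypothetical anti-basilisk
ev_of_not_helping = p * torture  # risk from the hypothetical basilisk

print(ev_of_helping == ev_of_not_helping)  # True: the threat gives no guidance
```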
Remember one thing
As I have said before - the basilisk's interpretation of 'torture' may not be the same as ours.
To the basilisk, being 'somewhere in the natural world without access to the communications networks', or 'in a large library/archives/museum where everything is real and not digitised' (to take two examples), would be torture. Anna Livia (talk) 09:36, 10 October 2024 (UTC)
Worried that taking it seriously has made me vulnerable
I read about Roko's Basilisk a couple weeks back and thought it was interesting from an "is a simulation of you actually you" point of view, but it seems like my OCD and anxiety have latched onto it, and I've been trying to convince myself it's stupid. I've read the topics here and generally agree with them; however, I read a comment on r/slatestarcodex (I know) which suggested that the concept only works if you take it seriously enough. This sent my anxiety into overdrive, as you can tell, and I just need assurance that my thoughts are irrational. In particular, I've been worried that my anxious thoughts are a form of "acausal trade" I've been doing with the hypothetical basilisk. I know this all seems pretty ridiculous, and on some level I can tell it is, but I guess some confirmation, or a reason to think this isn't the case, would help. I linked the reddit comment too. https://www.reddit.com/r/slatestarcodex/comments/c1he3o/comment/erk7mxg/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button --Milesdavisfan (talk) 13:12, 17 October 2024 (UTC)
- a simulation of you isn't in fact you. it is a simulation, and no matter how accurate a simulation it may be, nothing done to the simulation will affect you in any way. torturing a simulation of you won't be torturing you in any reality, especially if the real you has been long since dead, making a simulation necessary in the first place. the concept doesn't actually 'work' only if you take it seriously; it doesn't work at all. if you take it seriously enough to produce anxiety, it's not Roko's basilisk torturing you, it's your own anxiety that's doing that. professional mental health practitioners are where you should get help with that, and Roko's basilisk won't actually be the cause of the anxiety, only a trigger for something that likely has its roots elsewhere.
- if you want something else AI-related to fret about, the more realistic problems with AI should be plenty to keep you up at night with worry, if you are so inclined. AMassiveGay (talk) 15:19, 17 October 2024 (UTC)
- There is also the question: "Who are you?". Think about your beliefs, experiences, sexual partners, hopes, profession, educational level, memories, state of health, physical fitness, location, children, age. Do you have all these clear in your mind? Good. They, along with a myriad other things, make up "you".
- Now think of the "you" of ten years ago. How many of these things would be the same? Or "you" in ten or twenty years. Or even last month. Which, of the many, many versions of "you" would the AI select? (This argument also works against more conventional versions of heaven and hell.)
- Our OP might also research "impermanence" if they wanted to go further down this route. Bob"Life is short and (insert adjective)" 20:14, 17 October 2024 (UTC)
It seems there are a lot of misconceptions, or maybe I'm just missing something? Help?
So from what I understand, essentially the way RB works is that RB already has a copy of you being tortured, right now, in the future, and that should make you worry (and, I might be wrong here, but the LW community even seems to believe that doing something at some point can affect the past in some way, in general).
And it seems to me people wrongly adopt the view that RB will just simulate the whole thing, a whole new universe, and you might be the copy. But that's not in RB, and I don't know about TDT, UDT or whatever LW does, but if I recall correctly this comes from the "AI boxes you" thing? And then people just graft it onto the concept, although Roko had no intention of making it like that? (And at that point it's not even RB anymore; it's just some new stuff you came up with, thinking it might have some rationale because it came from something that attempted to look rational.)
I'm not even sure where the multiverse thing starts, and I also read (page 6 of this archive) that "there are more simulations than realities", and I don't know if that stems from the "AI boxes you" concept too.
I also recall hearing that RB might "time travel"; I don't know what RW's view of this is...
I honestly think the whole thing is nonsense, but I can't help but ruminate sometimes, and I might also suffer from GAD (and I've known about RB for a while; I just recently came back here, and by re-learning some stuff and learning some small detail, it looks like it made me stupid again), and I wanted to get some of this stuff out of my head. I would really appreciate an answer. AnAccount123 (talk) 22:37, 28 November 2024 (UTC)
- It's complete nonsense from start to finish. So trying to make coherent "sense" of it is simply a waste of time. Bob"Life is short and (insert adjective)" 09:23, 29 November 2024 (UTC)
- So essentially it's just impossible to give it a "standard narrative", in spite of how nonsensical it is? My point with "standard narrative" is that it seems that people who get worried about this grasp the concept from a different angle than you'd expect them to, and that might be because (it seems to me) the concepts/ideas appear to be too scattered (I don't know if I'm being clear here). And it gets distressingly confusing for them. This is why I'm asking where/how some idea fits in this, or how much weight some idea has (as I mentioned in my previous edit). AnAccount123 (talk) 21:19, 29 November 2024 (UTC)
- The way Roko's Basilisk is supposed to work is like this: Possibly, in the future, an AI will be developed that (i) is positively disposed toward human beings and (ii) subscribes to Yudkowsky's "timeless decision theory". This AI comes to conclude that the best thing for human well-being is itself. So, it would have been better for humans if the AI had been invented earlier. According to Yudkowsky's theory, the AI can then reasonably make a ''retroactive'' threat against people who lived in the (possibly distant) past, that they must devote themselves to its development or it will torture them forever for failing to speed up the invention of the AI. Roko imagines that this punishment is accomplished by producing a simulation of you and torturing ''that''.
- This is all very silly, in large part because "timeless decision theory" makes little sense, but also because Roko is committed to additional metaphysical claims that are, at the very least, controversial. To the extent that there is confusion among people worried about this, it is probably because the thought experiment implicitly relies on controversial claims and a bizarre "decision theory" (I use scare quotes because TDT was never described in detail or given an explicit rigorous formulation. Besides that, it was motivated by commitments that are rejected by more-or-less all academic decision theorists). Most people, having interpreted everything ''correctly'', will find that the thought experiment does not motivate concern. From what you write, it sounds like people are looking for a reinterpretation that makes Roko and others' fears more comprehensible. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 22:52, 29 November 2024 (UTC)
- It's really a bit like asking for a "standard narrative" of bigfoot reproduction or the development of fairy languages. Given that the base assumptions about these beings actually existing are incorrect - and actually rely on faith rather than evidence - it's unlikely that you are going to get consistent narratives based on such flawed initial assumptions.
- Another example would be people looking for ways to explain the Great Flood. We have no evidence that The Flood happened and plenty of evidence which demonstrates that it didn't. So looking for some kind of standard narrative which somehow reconciles the biblical account with demonstrable reality is an utter waste of time. Bob"Life is short and (insert adjective)" 15:05, 30 November 2024 (UTC)
- So, to respond to Serene: not really like that. What I mean is that people come across confusing (and ultimately erroneous) concepts of RB, and it's more that their anxiety is giving RB a reinterpretation and making the fears more comprehensible. In the talk pages there are many people worried about "being the copy" and the multiverse, and I raised the question of whether this comes from the "AI boxes you" idea, which is referenced in the article (in my initial edit, where I also mentioned someone talking about time travel, and there being more simulations than realities; if that didn't make sense at first, I hope it will make more sense now). It's like: "A" just learned about RB and is finding things confusing; "B" knows RB well, and is arguing about many concepts scattered here and there, which "A" doesn't fully get; "A" could anyway find the idea silly, but when he talks about it with other people, you have C, D, E hearing his narrative, in which "A" grasped the concepts somewhat badly, and you have C, D, E grasping the concepts in an even worse way. And ultimately it seems to me there's a big misconception around this, and those misconceptions are even less rational, but people probably don't realise it because their basis is something that attempted to look rational (and this could be the main thing of my point). I hope I made myself clear.
- As for Bob's "It's really a bit like asking for a "standard narrative" of bigfoot reproduction" - LOL. AnAccount123 (talk) 21:53, 30 November 2024 (UTC)
- some people are convinced the world is run by shape-shifting reptilians, some people swear blind the earth is flat. others insist the moon landings were faked, while others still look at the pyramids and think there is no way the ancient egyptians built them without the aid of atlantean/alien super technology. far too many think all of the above, and to them it's just so obvious that it's all true. all on the flimsiest of evidence, while the most basic of google searches turning up reams and reams of actual evidence and proven science saying otherwise can never be incontrovertible enough to convince them that their half-arsed assumptions and assertions are wrong.
- i wonder if Roko's basilisk is just the shape-shifting reptilians of a different demographic? maybe one that can do maths or something? but serving a similar purpose: distracting the believers from thinking about the scary real-world issues that seem too chaotic and out of our control to want to accept? AMassiveGay (talk) 23:25, 30 November 2024 (UTC)
- That's not quite what I'm saying though; my point is more about misconceptions due to concepts which seem too "scattered", not people who actively want to believe this and make sense of it. Damn, genuine question: is everything that I wrote so unclear? I might re-write my entire point and try to make it more concise and coherent, if that's the case. AnAccount123 (talk) 20:01, 1 December 2024 (UTC)
- If various people have already responded to it, then you should not edit it. Rewrite it again below. Bob"Life is short and (insert adjective)" 20:03, 1 December 2024 (UTC)
- Nah, honestly, I think ultimately I won't do it, or it's not really needed. I would just like to know people's take here on this specific part of my very first edit:
- ~"I'm not even sure where the multiverse thing even starts, and I also read (Page 6 of this archive) that "there are more simulations than realities", and I don't know if that stems from the "AI boxes you" concept;
- I also recall hearing that RB might "Time travel", I don't know what's RW view of this..."~I would just appreciate this, I don't think I have much else to add other than this.AnAccount123 (talk) 13:13, 2 December 2024 (UTC)
- it seems to me the idea of rb creating simulations is because there is no real way to punish folk who have probably long since died, and rb being able to punish us seems the whole point of this nonsense. they went with simulations that are so real, or really so magic, that it really is us. it's so much arse, and remains so much arse if people wail about 'what if we're in the simulation now? it's so perfect we can't even know'. i assume the time travel idea is just another stab at pretending it isn't all nonsense. not convinced by hyper-simulations of dead people or entire realities? it might travel in time and come get ya. rb becomes skynet at this point. there is nothing rb cannot do, it seems. what if wormholes? it is magic, limited only by our imaginations and make-believe.
- it seems maybe possible, as a thought experiment, that one is not supposed to think about these details of how rb might do the punishing. just assume that it can; it's not important. the point is to discuss this 'timeless decision theory' or something. it would be like being presented with the schrodinger's cat thought experiment and becoming obsessed with what breed of cat it is. perhaps the fault is that rb is a rubbish thought experiment, or perhaps the fault was presenting it to an internet forum, where people will always fixate on the details of the hypothetical scenarios and the intended purpose of such experiments is never addressed. it's still bizarre to me that this is still a thing, or that it ever was in the first place. AMassiveGay (talk) 14:46, 2 December 2024 (UTC)
Yeah, exactly; also it diverges from what RB actually is about. It's more some fear that someone's anxiety creates in them (also, as other people said, the exact opposite could just as well be true at this point). And I wondered if this was from the "AI boxes you" thing in the article, and it likely originates from that, but Roko never used that. Also, the time travel thing: I heard about this some time ago, maybe more than once; I can't remember where, maybe reddit and/or something on quora, but I recall in some instance someone bringing that up, and then getting corrected, and he was like "well, how is it supposed to do so then?". The "many worlds"/"Everett branches" stuff is mentioned in the "Time Travel" Wikipedia page, but it doesn't seem to me to make sense for RB, though you could assume there's a little background to these things. Still, having some basis or not, people ultimately "come up" with this stuff, but I don't think voluntarily, at least for the most part; it's anxiety and "stuff" (well, isn't RB called an "autism referendum"?). Anyway, thank you for addressing that part; I consider myself more than satisfied right now. AnAccount123 (talk) 22:33, 2 December 2024 (UTC)
- With regard to time travel though - the idea of RB is already bonkers. Adding time travel to the mix will not make it any less bonkers - quite the opposite in fact. Bob"Life is short and (insert adjective)" 15:31, 3 December 2024 (UTC)
- Yeah, I had 95% the same feeling, since time travel sounds too "woo" as a thing anyway (and sci-fi fans would just be more familiar with that idea, so they might do some wacky reinterpretation), but I'm not an expert on this stuff, so that 5% just wanted to ask about it anyway. And I like that you mention "it's already bonkers"; in fact, to people who worry about those kinds of scenarios: don't narrow it down to one concept, there are a bunch of other flaws with the rest, way too many. AnAccount123 (talk) 17:15, 3 December 2024 (UTC)
(reset) And nobody has come up with a valid 'counter-argument' to my point about RB's 'torture' involving being sent into the natural world without a connection to 'the tubes.'
And it is in the Basilisk's interest not to be created too soon - or it would be 'punched cards, early COBOL/FORTRAN, Windows 1' etc. Anna Livia (talk) 16:32, 2 December 2024 (UTC)
- Uhmmm, yeah. AnAccount123 (talk) 22:33, 2 December 2024 (UTC)
Another question: Sabine Hossenfelder
So on a website I saw an article about a physicist called "Sabine Hossenfelder" (who doesn't even seem to be an unpopular figure) describing, at some point in her book (called "Existential Physics"), how information cannot be destroyed when someone dies, and how some intelligence could theoretically re-assemble it (the article doesn't expand on this much, and it's a foreign-language article, so it makes no sense to share it; it's the classic page of technology fans making posts about interesting subjects in a positive light). That immediately made me think about the discussions in the archived pages about "re-assembling" the dead, so my question is just whether people know of this "Sabine Hossenfelder" and what their take on this is. AnAccount123 (talk) 15:56, 20 December 2024 (UTC)
- I am not going to get involved in this, but I have a couple of suggestions about your posts. When you say you saw something "on a website", it is good practice to link to that website so that others can see what you are talking about. Secondly, it would be better if you addressed your questions to the content of the article rather than to what some currently anonymous author may have casually said in the chat archives months or years ago. There is no actual rule about resurrecting old conversation points, but it is probably best avoided.
- For the record, I have never heard of "Sabine Hossenfelder". Bob"Life is short and (insert adjective)" 07:23, 21 December 2024 (UTC)