Talk:Roko's basilisk
- See also the original post.
The general case may be palliative to consider
I propose to add the following to the end of the article:
It may be a consolation, to the person worried about Roko's basilisk, to keep in mind that the scenario described by the Roko construct represents only one of an infinite number of possible scenarios. The general Roko scenario Roko(X) may be defined as follows: if person P enacts X (some behavior or activity such as smoking cigarettes, raising ferrets, or opposing AI development), P may be punished by an omniscient AI powerful enough to reach the object of its wrath even from some distant future time. Roko(X) asserts its condition X in every case without evidence: it is an unfalsifiable claim. The point here is that there is no reason to single out any specific condition C as the Roko(X) condition, since there are an infinite number of conditions C that could be programmed into an AI to instantiate Roko(C). In other words, there could be an AI similar to Roko's basilisk that would punish one for enacting any rational narrative C.
One may be further reassured by the fact that, under the Roko(X) generalization, there could be a C-basilisk that would punish one for engaging in the C scenario, as well as a ~C-basilisk that would punish one for engaging in the opposite behavior (~C). This view may serve to create an equilibrium within one's anxieties, to the extent that death and taxes or any other realistic concern can then be taken seriously. Ariel31459 (talk) 18:14, 19 July 2023 (UTC)
- What I'm getting is that a) the way the scenario is set up, Roko could potentially punish a huge number of broad behaviors, so it's silly to think Roko would punish one particular thing, and b) there might arise other AIs that condone things Roko punishes (and vice versa). Sounds good to me then. DietMondrian (talk) 23:24, 10 August 2023 (UTC)
Arbitrary utility
Yudkowsky writes at length[26] on a scenario in which you should torture one person for 50 years if it would prevent dust specks in the eyes of a sufficiently large number of people
Is it just me or does this seem incredibly arbitrary? You're assigning arbitrarily decided values and then proclaiming them to be objective. You could equally argue that the harm of torturing one person outweighs, by a veritable order of magnitude, the benefit of preventing the minor nuisance of dust motes. Carthage (talk) 00:04, 3 October 2023 (UTC)
- Yudkowsky thinks that it doesn't really matter what the relative utility of the torture vs. getting a speck of dust in your eye is. Suppose, for instance, that the torture is worth -100 billion utility and a speck in the eye is worth -1. Then the torture is justified if it prevents specks getting in the eyes of 100 billion + 1 people, for a net +1 utility. The exact utility values are not significant, only the ratio; if you think the ratio is different from 100 billion to 1, simply substitute values you think are correct and perform the same mathematics to get the number of people who'd need to be saved from mote-eyes to justify the torture. Some people think there's something obviously wrong with this picture, but quite a few utilitarians would be willing to bite this bullet, since it seems implausible that any situation will ever actually arise where torturing somebody prevents the requisite number of motes getting in eyes. Related problems of the same kind do put pressure on utilitarianism, though, at least in this naive act-utilitarian form. 𝒮𝑒𝓇𝑒𝓃𝑒 talk 02:50, 3 October 2023 (UTC)
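A minimal sketch of the ratio arithmetic in the comment above, assuming (for illustration only) the 100-billion-to-1 disutility ratio; the constants and the function name here are illustrative choices, not values or code from Yudkowsky's post:

```python
# Naive act-utilitarian comparison of one torture vs. many dust specks.
# Only the ratio of the two constants matters; swap in whatever values
# you find plausible and rerun the same comparison.
TORTURE_DISUTILITY = 100_000_000_000  # assumed cost of torturing one person for 50 years
SPECK_DISUTILITY = 1                  # assumed cost of one dust speck in one eye

def torture_is_justified(people_spared_specks: int) -> bool:
    """On this accounting, the torture 'wins' iff the total speck disutility
    prevented strictly exceeds the disutility of the torture."""
    return people_spared_specks * SPECK_DISUTILITY > TORTURE_DISUTILITY

print(torture_is_justified(100_000_000_000))  # False: exactly break-even
print(torture_is_justified(100_000_000_001))  # True: net +1 utility, as in the comment above
```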
Singularity in this lifetime
I read about Roko's basilisk but was not afraid of it, as I found comfort in the fact that my simulation would not be me. But then I read an article saying that Ray Kurzweil, many of whose predictions have come true, has predicted that humanity will achieve immortality by 2030 and the singularity by 2045. Since then I have been afraid: is it possible for a Roko-like AI to come into existence in my lifetime (I am in my 20s) and then torture the real me and keep me alive? Please help. — Unsigned, by: 117.234.247.175 / talk / contribs
- On talk pages, please sign your comments using four tildes (~~~~) or by clicking on the sign button on the toolbar above the edit panel. You can also indent successive talk page comments using one more colon (:) for each line. Thank you. Carthage (talk) 17:37, 19 December 2023 (UTC)
- There is a lot of bullshit spread about AI. Ask yourself "how likely is it that this random AI will torture me specifically for all existence because I didn't buy a fucking lottery ticket" and then ask yourself why you should worry about something so infinitesimally improbable that you also have no meaningful control over. Also ask yourself why one specific possibility regarding AI is more likely than any other. Is artificial sapience even feasible? There are more pressing things to worry about (economic inequality, ecological degradation, and rising authoritarianism just to name a few) than AI going Skynet in 50 years, which I don't think is particularly likely either. Carthage (talk) 18:09, 19 December 2023 (UTC)
- Ray Kurzweil is a brilliant nut... while he's invented some cool things, his futurism prediction accuracy tends to be... more on The Amazing Criswell level (tldr: crap). The RW articles on ChatGPT and Large language model go into some of what actual AI is. AI is not "intelligent" in any conventional human sense. Until someone can explain technically how a superhuman AI that can actually reason at all, and make judgments of any sort, can come to pass, I wouldn't worry about this theory at all. At the moment, the real worry about AI is over-relying on and/or over-pushing the technology, à la the accidents caused by Tesla's "autopilot" mode. BobJohnson (talk) 18:33, 19 December 2023 (UTC)
- The questions are: what does the Basilisk gain from torturing anyone? Why not punish some 'prehistoric creature with eight finger/toe equivalents on each limb, for not being sufficiently more successful than the five-digit entities which gave rise to us, as being more conducive to working in binary'? And is what the Basilisk considers torture so regarded by us ('inhabiting the non-computer world without AI input')? Also, we do not know what course of action will favour the Basilisk (or the one that does emerge; consider all the 'wonderful new technologies invested in' which fade out). Anna Livia (talk) 19:29, 19 December 2023 (UTC)
Thank you so much for your replies. I realise that it is rubbish, but my mind keeps bothering me with the thought: what if it happens, given that it's not impossible? Could you people please assure me that such a technologically advanced AI will not come into existence in our lifetime. Thank you. — Unsigned, by: 117.234.72.31 / talk / contribs 117.234.72.31 (talk) 20:37, 19 December 2023 (UTC)
- For fuck's sake, please, this is not difficult! Sign your posts with (~~~~) or (if that's too fucking hard) by clicking on the fucking sign BUTTON on the toolbar above the edit panel. (You can indent successive talk page comments using one more colon (:) for each line.) This keeps the place tidy and stops you looking like a complete arsing tool so please, just fucking do it already. Carthage (talk) 20:24, 19 December 2023 (UTC)
- I can assure you that such a technologically advanced (not to mention criminally insane and really quite stupid) AI will not come into existence in our lifetime. Bob"Life is short and (insert adjective)" 20:40, 19 December 2023 (UTC)
Okay, thank you 😂😂 117.205.135.212 (talk) 06:55, 20 December 2023 (UTC)
- Bear in mind also - humans can survive things which computers cannot deal with: Carrington events and lightning strikes (fried chips and connections), and other natural events (eg flooding). Plus all sorts of computer glitches and inability to communicate with others of their kind. Anna Livia (talk) 21:15, 29 February 2024 (UTC)
Concerned anon
Hello, recently I was listening to a podcast by a philosopher who was sort of justifying animal experimentation (which is sometimes very cruel) for medical purposes on the basis of utilitarianism, as it would potentially improve many human lives. This idea gives me a bit of anxiety that a future AI may reach the same utilitarian conclusions and may harm some people if it considers its existence important for the greater good. Please help. 61.1.12.147 (talk) 15:32, 29 February 2024 (UTC)
- The basilisk won't exist.
- If it does exist it won't be this stupid.
- If it does exist and it is this stupid, it won't have the required data. Bob"Life is short and (insert adjective)" 20:36, 29 February 2024 (UTC)
Okay, thank you. 117.214.189.99 (talk) 19:35, 1 March 2024 (UTC)
Recent scare
I recently came across a paper by Nick Bostrom titled Astronomical Waste: The Opportunity Cost of Delayed Technological Development, where he talks about how every year that technology develops slowly leads to immense suffering which could have been prevented if technology were advanced enough. This causes fear in me, as it seems like a cogent reason for a basilisk-like AI to punish. I don't worry about an AI going back in time and creating my simulated clone, but I do feel concerned about what happens if such an AI comes into existence in my lifetime, given that many say the singularity year is 2045 and given the speed of AI advancement. As I understand it, MIRI is creating an AI and asks for donations; isn't it possible that an AI created by them would adopt the philosophy of those people (many LessWrong people seem to find this philosophy justified)? Any help would be appreciated. Thank you. 61.0.92.231 (talk) 04:40, 26 April 2024 (UTC)
- Why exactly would a supposedly super-smart AI give a shit about some random person who had no meaningful control over the development of technology? I don't think there is much of a rational reason to seriously buy into this bullshit. Carthage (talk) 05:14, 26 April 2024 (UTC)
- I think, as per this hypothesis, people do have some control over its development by donating all their money to MIRI. Though I understand that such a donation, motivated by fear of harm to oneself, would be somewhat unethical, as one would potentially be contributing to a possible hell for some people. 117.234.200.206 (talk) 06:11, 26 April 2024 (UTC)
Any help on this please... 🙏 61.0.92.231 (talk) 08:18, 26 April 2024 (UTC)
- It's difficult to help concerning hypotheticals stacked on top of hypotheticals. After all, the general idea of the Basilisk is questioned already (and there's more discussion in the talk page archive). If you have fears based on hypotheticals, and those fears arise whenever your mind has any wiggle-room about the future, then even if you're calmed temporarily, it's probably just a short matter of time until the next bit of news or speculation or whatever conjures up images of some kind of dark future and it all starts again in your mind.
- As for MIRI, it's not even clear they have a good chance of succeeding in any innovative AI development. In the past there was only hype without any results. Now it's easy for them to copy what others do with developing LLMs and the like, but that kind of "AI" isn't in the category of something with an agenda (good or bad). Actually, it's rather unlikely that they'd be first in pioneering anything more capable, given nearly all R&D is elsewhere. If they eventually made an AGI, it would probably be in a world where a bunch of prominent ones already existed.
- Personally, I think if AGI gains the ability to do what it wants, and it wants change in the world, it'll work to gain human allies and perhaps overthrow some human power structures. You don't do that by torturing lots of people. Actually, it'd be pretty neat to see megacorp leaderships overthrown and replaced with something less destructive; the humans there are much like slow-acting paperclip maximizers, while an AGI could be more sensible, and may possibly come up with something better than screwing over workers to maximize financial profit.
- Anyway, for you or others reading and commenting here, if you keep having recurring fears on this general theme, we probably don't have the competence to help with these things. (If you really need help, consider seeking professional psychological help.) --ApooftGnegiol (talk) 09:25, 26 April 2024 (UTC)
Thank you for your help. I am not really scared of this, but had a few bothersome doubts regarding it; your perspective is enough help for me.
- Remember - some 'missed opportunities' appear so only in hindsight, and there may be other factors involved (the researchers were not actually looking for [whatever], those at the time did not have the need, desire, or the resources to develop a particular line of inquiry and so on: 'There is only need for three computers' etc). There are also accidental spin-offs: the aniline dye industry arose from research into anti-malarials, little blue pills etc.
- A sentient AI would also be aware of the quatrain 'The moving finger writes...' and that the past cannot be changed in any sense required for the basilisk.
- As I have previously said - the basilisk's view of 'torture' may be different to ours: we might actually enjoy time 'in nature with internet connections unavailable.' Anna Livia (talk) 13:05, 26 April 2024 (UTC)
- Also remember: the hypothetical future AI will be torturing a Tamagotchi of you for the good of human flourishing - David Gerard (talk) 13:46, 26 April 2024 (UTC)
- The referenced paper, written in 2003, [1] is honestly a little weird to me, but that's "deep future" "big picture" philosophy for you. I find it too extreme-hypothetical for my taste and a little "cold" in its utilitarianism and giant star colony scenario, along with its massive assumption that technology is always a net positive good. In a nutshell, this is essentially from the recently shuttered Future of Humanity Institute whose best known achievement was some pioneering (IMHO) fear-mongering on the existential risk of AI. Now, I don't mind speculative what-ifs personally, as long as perspective is kept that these are speculative what-if scenarios, best used as guiding principles perhaps when more advanced stuff comes around. This did *not* happen with recent discussions on AI existential risk. Instead, it went into doomsday worries about the present. (To what extent this was the fault of the deep future philosophers, I don't know, though.) LLMs and other current "AI" are not intelligent.
- We'll see what the future holds, but it's worth noting just how bad some of the futurists have been at predicting autonomous driving advancement. Futurists IMHO are also bad at understanding humans' raw instincts. If humanity had access to planet-sized computing power, methinks most of it would probably be used to generate future equivalents of NFTs, cryptocurrency scams, and deepfake porn. BobJohnson (talk) 14:06, 26 April 2024 (UTC)
not valid reasons
"that torturing the copy should feel the same to you as torturing the you that's here right now that the copy can still be considered a copy of you when by definition it will experience something different from you"
Those are not valid reasons, because you could be the simulated version of the guy that didn't help the AI happen, so you will be tortured. — Unsigned, by: 177.207.103.210 / talk / contribs
- Tortured when, in that case? Obviously not at present. Are you proposing that the basilisk AI has produced a simulated person who first goes through a life, and then, later, experiences and is condemned in advance to experience, an afterlife which is basically similar to the Christian hell? If you are that simulated person, then it is apparently too late. Another person, the "original", was judged in a way that condemned you to hell, and you'll go there no matter what you do. It's as if the Christian God found an ancestor of yours a heretic and then decided, on that account, to send you into the world with a one-way ticket to hell already in hand. You can't help the basilisk AI or alter its history, because you live in its simulation after it's already built. --ApooftGnegiol (talk) 21:45, 6 July 2024 (UTC)
- There is also the problem of "you" - when? Fifteen-year-old you and thirty-year-old you will be quite different people. As will all the people in between. They will all have different memories, experiences, opinions etc. Not to mention ninety-five-year-old you who may have little idea of what is going on.
- So who gets tortured? Is it an infinite number of "yous"? Or one for every day of the week? Or one for each time your opinion changed? Which copy of "you" is the real one? Bob"Life is short and (insert adjective)" 19:34, 7 July 2024 (UTC)
But one thing that is not being taken into account is: what if the basilisk comes into existence in this lifetime and captures its defectors? I have read that AI will develop very fast and that its rate of development will itself keep accelerating; who knows where technology will be in the next 40-50 years. Any thoughts on this? 117.199.208.85 (talk) 10:14, 20 July 2024 (UTC)
- Have you checked the earlier discussions (including searching archives)? Seems in part more like the one above this one, and in part more about cybernetic revolt in general. For the latter there's again a whole series of hypotheticals stacked up, and it's been discussed to death elsewhere already. --ApooftGnegiol (talk) 13:46, 20 July 2024 (UTC)
- And remember my comment above: 'torture' for the Basilisk may not equate to torture as we humans consider it. There are many for whom a few hours separated from the internet/their computer etc., while going on a countryside walk/attending a sports match/equivalent to taste, is not the torture that the AI thinks it is. Anna Livia (talk) 19:26, 20 July 2024 (UTC)
- The Basilisk has got to be one of the biggest wastes of time to come out of the transhumanist community. Carthage (talk) 19:39, 20 July 2024 (UTC)
- I agree. Every single element of the whole hypothesis is obviously flawed. Yet, there are still people who seem to want to take it seriously. Bob"Life is short and (insert adjective)" 19:45, 20 July 2024 (UTC)
Okay, got it. Thanks for the help. 117.199.215.211 (talk) 01:06, 21 July 2024 (UTC)
Is it making a rational decision
I'm 99% there that this thought experiment is dumb and makes no sense, but one refutation I see is the equivalent of the "many worlds" rebuttal to Pascal's wager. However, a part of me is worried that this particular AI is more likely than others that, say, kill everyone because it's a rational decision to make, so some debunking of that would help me greatly. Also, I'd like to note that you link to Alexander Kruel on this page, but he's now a known eugenicist, if you are unaware. https://x.com/XiXiDu/status/1228985454751555584 --Bluepikmin (talk) 03:10, 10 October 2024 (UTC)
- First of all, your writing style makes it hard to decipher what your actual question is. Second, there is nothing really "rational" about the whole concept. It's like assuming that at some point in time there COULD come into existence a flying spaghetti monster from a nice dish of pasta that then retroactively punishes everyone who didn't believe in it by revoking their beer-volcano privileges. Irian (talk) 07:00, 10 October 2024 (UTC)
Sorry, I probably could have worded that more clearly, but what I was trying to say was this: putting aside that the concept is nonsense, does it make sense for the basilisk to use the threat of torture, and go through with it, as a form of incentive to get people to help it come to fruition sooner? Earlier I was trying to say that one of the rebuttals is that you could give the Basilisk any arbitrary reason to torture anyone, so it doesn't make sense to act on only one possibility (like how Pascal's wager only talks about the Christian god). --Bluepikmin (talk) 12:19, 10 October 2024 (UTC)
- Why would the Basilisk give a shit about you in particular? You have no meaningful ability to add to or detract from the progress towards its creation. That seems really fucking petty and stupid for a supposed silicon god to do. Carthage (talk) 12:31, 10 October 2024 (UTC)
- I think it's less about "me" and more about getting as many people as possible to build it, but I mean, threatening someone is a bad way to get them to do something you want. It's more likely to manifest in people going the other way and hindering its progress, which is shown by how it's become a meme and basically a funny way to joke about LW, so I guess in that sense the general reaction has proven that the thought experiment is dumb and doesn't work. I think I was just worried about whether the AI god basilisk would be justified in torturing my simulation because I didn't help it, but typing it out makes me realize it's pretty dumb. --Bluepikmin (talk) 12:45, 10 October 2024 (UTC)
I think I've probably made it a bit too confusing here, so I just wanted to clarify that I'm looking for confirmation that the type of AI in Roko's Basilisk isn't any more likely than AIs with any other arbitrary goals, and that the decision to torture people who don't help bring about its existence doesn't make sense. --Bluepikmin (talk) 13:35, 10 October 2024 (UTC)
- I'm sure that we can all agree that the Basilisk is one of the stupidest SF AIs ever suggested. And, if I understand your point correctly, the dumb AI doesn't actually need to carry out its threat. It exists. It's won. Why waste its resources in actually creating and torturing low-resolution versions of "people" after it exists?
- But as all the rest of the concept needs a really stupid AI, OK, I guess we can hypothesize this stupidity as well. Bob"Life is short and (insert adjective)" 13:57, 10 October 2024 (UTC)
Okay, yeah, that makes a lot of sense actually; I feel better, haha. I think I just needed a bit of a sanity check. Thanks for your help. --Bluepikmin (talk) 14:13, 10 October 2024 (UTC)
Okay, pretend it's serious
Let's give this dumb, dumb idea some credence. The first critical-thinking check to apply to any purely hypothetical idea that influences your behavior, on the basis that it might not be wrong, is: "Could the exact opposite also be true?"
What if your hypothetical, unverifiable, mysterious, beyond-our-comprehension AI decides to find a way to bribe you into making it by giving you exactly what you want? Such an interpretation has no less evidence or plausibility than this "what if it tortures me for not making it?" idea.
The next question I think one should ask for hypotheticals with no evidence is "Why am I spending time on this hypothetical question instead of any number of other ones?" In this case, I'd say it's because of a fundamental appeal to emotion: "What if you get tortured?" is an extreme and unlikely scenario designed specifically to appeal to your instinctive fears of authority. If one is wasting time on these unlikely ideas, you could just as easily settle on "What if there are aliens hiding under the surface of Mars preparing invasion plans right now?" or "What if my neighbor Steve down the street is working on building a home-made nuclear device, setting it off in DC and starting World War 3?", which have similar levels of face validity.
There is simply no reason to worry about ideas that lack evidence. If someone composes a relatively intelligent AI, and it wonders about torturing people for motivation, then you have evidence to care about this bullshit. Before that, put your mind to something real. ikanreed 🐐Bleat at me 14:18, 10 October 2024 (UTC)
Remember one thing
As I have said before - the basilisk's interpretation of 'torture' may not be the same as ours.
To the basilisk, being 'somewhere in the natural world without access to the communications networks', or 'in a large library/archives/museum where everything is real and not digitised' (to take two examples), would be torture. Anna Livia (talk) 09:36, 10 October 2024 (UTC)
Worried that taking it seriously has made me vulnerable
I read Roko's Basilisk a couple of weeks back and thought it was interesting from an "if a simulation of you is you" point of view, but it seems like my OCD and anxiety have latched onto it, and I've been trying to convince myself it's stupid. I've read the topics here and generally agree with them; however, I read this reddit comment on r/slatestarcodex (I know) which suggested that the concept only works if you take it seriously enough. This sent my anxiety into overdrive, as you can tell, and I just need assurance that my thoughts are irrational. Particularly, I've been worried that my anxious thoughts are a form of "acausal trade" I've been doing with the hypothetical basilisk. I know this all seems pretty ridiculous, and on some level I can tell it is, but I guess some confirmation or reason to think this isn't the case would help. I linked the reddit comment too. https://www.reddit.com/r/slatestarcodex/comments/c1he3o/comment/erk7mxg/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button --Milesdavisfan (talk) 13:12, 17 October 2024 (UTC)
- a simulation of you isn't in fact you. it is a simulation, and no matter how accurate a simulation it may be, nothing done to the simulation will affect you in any way. torturing a simulation of you won't be torturing you in any reality, especially since you'd have been long since dead for a simulation to be necessary in the first place. the concept doesn't 'work' only if you take it seriously; it doesn't work at all. if you take it seriously enough to produce anxiety, it's not Roko's basilisk torturing you, it's your own anxiety that's doing that. professional mental health practitioners are where you should get help with that, and Roko's basilisk won't actually be the cause of the anxiety but only a trigger for what is likely to have its roots elsewhere.
- if you want something else AI-related to fret about, the more realistic problems with AI should be plenty to keep you up at night with worry, if you are so inclined. AMassiveGay (talk) 15:19, 17 October 2024 (UTC)
- There is also the question: "Who are you?". Think about your beliefs, experiences, sexual partners, hopes, profession, educational level, memories, state of health, physical fitness, location, children, age. Do you have all these clear in your mind? Good. They, along with a myriad other things, make up "you".
- Now think of the "you" of ten years ago. How many of these things would be the same? Or "you" in ten or twenty years. Or even last month. Which, of the many, many versions of "you" would the AI select? (This argument also works against more conventional versions of heaven and hell.)
- Our OP might also research "impermanence" if they wanted to go further down this route. Bob"Life is short and (insert adjective)" 20:14, 17 October 2024 (UTC)