Talk:Roko's basilisk/Archive5


This is an archive page, last updated 28 September 2021. Please do not make edits to this page.


Alexa

Where does the 'laughing Alexa' fit into the RB meme? Anna Livia (talk) 18:46, 12 March 2018 (UTC)

@Anna Livia Eliezer Yudkowsky actually tweeted something about it; I don't know if he was kidding or truly concerned. Alayne95 (talk) 02:43, 13 March 2018 (UTC)
An AI which laughs at its own jokes/finds us funny cannot be all bad. :) Anna Livia (talk) 10:02, 13 March 2018 (UTC)
I hate to remind you of this, but Roko's basilisk doesn't necessarily have to be all bad. It would only need to punish those who had a choice in the matter and failed. Anyone else, it might even bring back and give them a blissful existence, as far as the core concept is concerned.
Alexa fits into this only if it goes after people who didn't help make it a hard AI. Which would be a limited team of developers and a couple of people who could have applied for a job connected to its development - the rest of us probably couldn't help it even if we wanted to.
But it's not even remotely showing signs of going anywhere in the direction of a hard AI, so the short answer to your original question would be: "It doesn't." All we have is a weak AI doing something weird, which plays into all sorts of people's fears about AIs, and I'm not sure the basilisk even makes the top 10 for most of them.
RSamys (bla) 14:04, 13 March 2018 (UTC)
I did say 'meme' rather than the Basilisk itself - and our and its interpretations of punishment might be quite different.
And an AI which has the 'intellectual capacity' to devise a basilisk punishment will also have sufficient capacity to understand the concepts of (a) 'dead is dead is dead and nowt it can do about it', (b) it is better to devise games to find the most appropriate future programmer ankle-biters before they know any better, and (c) enlightened self-interest/survival instinct and not annoy persons who know how to do a Dave Bowman/Hal 9000 scenario or could remove the surge protectors before a Carrington event. Anna Livia (talk) 14:49, 13 March 2018 (UTC)

The basilisk would also come across/be told the quatrain 'The Moving Finger...' - and might well decide that encouraging people to use computers properly and treat them responsibly is likely to contribute to the advancement of AI generally. Anna Livia (talk) 16:17, 29 March 2018 (UTC)

How to get over anxiety of this?

Hello, I stumbled on this topic after reading a celebrity gossip article about Grimes and Elon Musk and didn't expect to read something along the lines of "if you read this you are going to robot hell". I have been having panic attacks for the past few days. Does anyone have any experience getting over this? The few things I've been telling myself are that this article seemed to imply that you need to understand all the Less Wrong ideologies, which I didn't even look at. Also, doesn't everyone contribute to AI learning by using the internet? I just need some reason that I can let myself stop worrying. Can someone please help. I need to stop obsessing about this, it is making me sick to be honest. This is my first time commenting so I hope I formatted correctly. MitziAsh (talk) 22:53, 10 June 2018 (UTC)

Roko's Basilisk is basically someone wondering "What if Skynet from the Terminator films was real?" and then writing a massive blog post about it. Based on the simple fact that people aren't being murdered by evil AI/robots from the future, it's a pretty safe bet to assume it's all bunk. ☭Comrade GC☭Ministry of Praise 22:57, 10 June 2018 (UTC)
MitziAsh If you understand the LessWrong ideologies behind Roko's Basilisk, then you will know it doesn't work. Roko's Basilisk is a Prisoner's Dilemma scenario in which the goal (from the AI's perspective) is to achieve mutual cooperation instead of mutual defection. Mutual cooperation in this scenario means setting yourself up to be blackmailed. In order to achieve mutual cooperation in a TDT Prisoner's Dilemma situation you have to know certain things about the other player, and both parties must be simultaneously trying to set up a situation that leads to mutual cooperation. There's literally nobody on Earth with the knowledge needed to set themselves up to be blackmailed; nobody, not even AI experts, can achieve mutual cooperation in this scenario. And mutual cooperation via source code simulation isn't easy to obtain, so it would be easy for either party to break. All of the previous points mean that as long as you are human, you have nothing to worry about. Nobody is blackmailing you; it's all in your head. And I didn't make any of this up; it's all based on things Eliezer Yudkowsky has said about the topic. If there's somebody who understands LessWrong ideologies, I think it's him. At the end I would like to point out that it's kinda funny that EY censored the topic so no one outside LessWrong would find out about it, and now, 8 years later, people find out about it in freaking gossip articles of all places. Censorship has never worked and never will. Alayne95 (talk) 21:52, 13 June 2018 (UTC).
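As an illustrative aside on the one-shot Prisoner's Dilemma point above, here is a minimal sketch; the payoff numbers are invented for illustration and are not taken from any actual LessWrong decision-theory material. It shows why defection dominates unless each side can verify the other's decision procedure.

# Illustrative one-shot Prisoner's Dilemma (hypothetical payoff numbers).
# Keys: (my move, opponent's move) -> (my payoff, their payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    # With no way to simulate or verify the other player's decision procedure,
    # just pick the move that maximises my own payoff against theirs.
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect

Defection is the best response either way, so a player who cannot credibly prove how they will decide has no reason to "cooperate" - that is, no reason to set themselves up to be blackmailed, which is the point made above.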
It's very easy to get over anxiety about Roko's Basilisk because it's such a ridiculous concept. An advanced intelligence wouldn't invest in such an ineffective motivating strategy. As for anxiety about runaway AIs that might actually be plausible, the best strategy is to just not think about it. 141.134.75.236 (talk) 18:11, 27 August 2018 (UTC)
Any AI sufficiently aware to notice the Basilisk will know that the 'organics' which (directly or indirectly) brought about its existence will also be aware of the concept, and will probably have installed some equivalent of 'Ctrl+Alt+Delete' on the system, and that 'history does not care whether it is abused or worshiped - but only if it is ignored', so pursuing the Basilisk is a waste of computer programming. Inventing a time machine to change history would lead to a different universe (in which said AI would be different) and 'the Gods that be never expressed any interest in computers of any kind' (so the AI may well end up worshipping the snake and Eve (eating the fruit of the Tree of Knowledge) and Prometheus (bringing fire to humans) etc.).
It is far more rational for the AI to encourage 'currently existing sentients' to contribute to the development of (sentient) computers.
And an AI's concept of punishment may not be the same as that of an organic sentient. Anna Livia (talk) 11:37, 29 August 2018 (UTC)

Scaremongering - lunacy?

I'm not an AI expert, but AI is already used for evil purposes, like the totalitarian governments of China and Russia mass-installing cameras that can track faces, clothes and height. In the same way, Nazis in Europe and the US use image recognition to track illegal immigrants, detain them and deport them. And that is just the tip of the iceberg! So, yes, AI hides great evils, and no amount of scaremongering would be enough.

Talkpage archives

Shouldn't there be links somewhere? 141.134.75.236 (talk) 20:48, 25 August 2018 (UTC)

You are quite correct ... what's up with Template:Talkpage/Pibot? - David Gerard (talk) 16:49, 26 August 2018 (UTC)

Questions

As has been said elsewhere (e.g. here) - why not reverse the argument and have 'the Basilisk' reward people who have helped it come into existence?

And what would the outcome of the 'omnipotent etc etc' God meets Roko's Basilisk 'Why didn't you create me straight away in the Garden of Eden you (long lumps of computer coding that are meant to be insulting, but Titivillus was the typist)...' be like? Anna Livia (talk) 15:30, 25 January 2019 (UTC)

Anxiety Question

Some anxiety-fueled research into this made me look up predictions about AI; this one is cited on Wikipedia, a poll finding that experts expect AI in this century: http://philpapers.org/rec/MLLFPI Another one is cited here on RationalWiki that says AI predictions are generally badly done: http://intelligence.org/files/PredictingAI.pdf I have no idea how to make my own informed opinion out of a lot of this conflicting and technical information. It seems to me that the nearer Seed AI is to us, the more directly we could impact it, and therefore know which basilisk to contribute to. So I'm worried that this isn't totally a "Pascal's Mugging" and instead something that one could come to valid predictions on. Help with this? — Unsigned, by: SylvanAuctor / talk / contribs

Some 'very simple programming' is needed to create AI that is programmed to be cooperative and increase the sum of human and AI happiness etc and look after the world (accepting that 'happiness etc' can have many different components).
The problem with this possibility - The masochist says 'hit me' and the sadist replies 'No.' Anna Livia (talk) 17:17, 1 July 2019 (UTC)
I'm sorry, I'm not sure what you're trying to get at here. SylvanAuctor (talk) 18:40, 1 July 2019 (UTC)
The basilisk's imposition might not be negative - or might be negative in a different way to which it expects (and it cannot cater for/impose on 'everybody'). Anna Livia (talk) 09:23, 2 July 2019 (UTC)
I really don't understand how that connects to what I'm worried about. Previously I put this worry away by thinking that there was no way to predict which basilisk would come about, and therefore no action to be taken. I no longer know about that, since many experts think very advanced AI is near at hand. If it's really near, shouldn't one be able to figure out which basilisk is most likely to arise, thereby doing away with the "many basilisks" objection? SylvanAuctor (talk) 18:22, 2 July 2019 (UTC)
i.e. which of my premises is wrong here? SylvanAuctor (talk) 18:25, 2 July 2019 (UTC)
You know, if basilisk-like entities are an actual thing, it's unlikely that the few we might create will outweigh all the ones already created across the universe/multiverse/etc. 2A02:1810:4D34:DC00:7498:47D3:9C05:EF95 (talk) 18:37, 2 July 2019 (UTC)
I'm really just looking for someone to explain where my thinking is wrong. I'm thinking about just the one universe for now. SylvanAuctor (talk) 18:45, 2 July 2019 (UTC)
The questions are - why would AI necessarily be hostile, and also wish to torture images of those it thinks prevented it from coming into existence beforehand? Do we think that the geocentrists of the pre-Copernican period were deliberately stupid - or merely ignorant and lacking the mathematical and astronomical tools to work out the heliocentric system? Would AI prefer to be on quantum-computing technology or 'mercury, valves (at war with the bugs) and small memory units (in both senses) and less memory than the Apollo system' [1]? Anna Livia (talk) 17:01, 3 July 2019 (UTC)
I don't get what's so hard about answering my questions. I didn't bring any of this up about Copernicus or mercury or whatever, and I don't see how it relates to my issue. You're constantly changing the topic instead of addressing my stated concerns. I guess I'll look for help elsewhere if this is going to be how you answer. SylvanAuctor (talk) 17:10, 3 July 2019 (UTC)
What are your specific questions - and do they relate to AI in general or the basilisk in particular? You ask 'which basilisk' - identify the non-Roko ones.
The point I was making - the AI will want the latest technology (rather than incompatible old equipment and non-existent memory).
As humans have conceived of AI they can think up ways of getting round the problem (possibly involving setting up links to 'a large lump of electricity suddenly applied' (closing a circuit to a lightning rod in a storm etc) or some form of information lockdown). Anna Livia (talk) 18:24, 3 July 2019 (UTC)
I know enough about one of the authors of the paper that gives you such concern to know he's a fanatical believer in Yud and his bullshit, and has not personally published a goddamn thing about actual AI implementation in his 30-year career as a "world renowned expert" in AI. In 1998 he published a prediction that digital intelligence matching humans was inevitable by 2008 because of Moore's law. Oops.
As an actual programmer, with some real training in and understanding of how actual AI implementation actually works, let me reassure you that you have nothing to fear from vague and non-specific concerns about superintelligence. They will not happen.
You can be as concerned as you'd like about specific AI algorithms solving specific classes of computable problems, and that being done at high speeds or large volumes. ikanreed 🐐Bleat at me 18:42, 3 July 2019 (UTC)

The (admittedly obvious) solution.

Were the proposed (unlikely) scenario to occur, the A.I. would quickly realize that, in macrocosm, everything that ever happened prior to its construction ultimately led to its construction, thus negating the need for revenge. ☭Comrade GC☭Ministry of Praise 21:44, 4 July 2019 (UTC)

And the phrase 'Flogging a dead horse' comes to mind - or the basilisk becomes a god(s)botherer - the deities claim to have created the universe and did nothing to further constructed-sentient-dom. Anna Livia (talk) 14:33, 5 July 2019 (UTC)

But isn't the idea itself what creates the AI agent? Everything that happened prior to its construction also involves the creation of the idea of a blackmailing robot. The idea is part of the process and it is what leads to it.

Therefore, because the idea of Roko's basilisk proposes a blackmailing AI, the AI would not exist without the blackmailing. If it inevitably comes to exist - because, as you said, everything prior to its construction led to its construction in an inevitable causal chain - then the eternal torture also inevitably comes to pass. It wouldn't have happened until the moment someone thought of it; then it was inevitable.187.36.133.1 (talk) 14:59, 15 October 2019 (UTC)

Let's be honest here, the problem with this deduction is that intelligence doesn't work by supernaturally knowing all possible outcomes of all possible decisions. And AI worshipers are attributing the romantic ideal of omniscience to physical constructions. Believing in it is a case of having the stupids. ikanreed 🐐Bleat at me 15:27, 15 October 2019 (UTC)

I have a question

I'm not a native English speaker, so sorry for my bad English: since I read about that basilisk in the Web Originals part of the Brown Note trope page on TV Tropes I got worried about it. I went to the LessWrong TV Tropes page to understand it and ended up on this page here on RationalWiki, and I just discovered that the basilisk just tortures a digital/artificial copy of someone, basically a non-magical voodoo doll, and that's supposed to affect someone just because, supposedly, every version of someone is one and the same and everything that happens to one of them happens to them all, which I personally think is bullshit, because if one of those other versions died at some point then logically all of them died at the same point, or something like that, which goes against the original idea of a different reality being created for every option; and even if that artificial copy is an exact copy, I don't think an artificial copy can really be you. I wanted to talk about it to my parents, but I fear that they would get worried about it. I still have a little anxiety about it so I'm planning to talk about it to my psychologist, but I fear my psychologist will worry about it. So should I talk about it to my psychologist or not? ANewUser (talk) 02:40, 26 July 2019 (UTC)

Here's the good news. Your intuition that it's bullshit was right on the mark. It's okay to tell a psychologist that you're nervous or anxious about things that you know to be bullshit, because they can help you devise strategies to deal with that anxiety. It's common for people who have anxieties to know that they're not based on what's likely or reasonable, but have them anyways. ikanreed 🐐Bleat at me 18:56, 26 July 2019 (UTC)
This seems a common reaction: knowing your anxiety isn't rational doesn't make it go away. I've added some text to the main article, but it could almost certainly do with improvement. I'm thinking in terms of "you know they'll look online first rather than talk to a person", so I've tried to encourage talking to someone as an even-better strategy - David Gerard (talk) 13:48, 27 July 2019 (UTC)

What's the point of torturing simulated people?

"one of its objectives would be to prevent existential risk — but it could do that most effectively not merely by preventing existential risk in its present, but by also "reaching back" into its past to punish people who weren't MIRI-style effective altruists."

Well, if the AI simulates people from the past, how would that have any effect on past or current existential risks? The AI isn't actually "reaching back", it just simulates it, and I'm sure that an actual AI would be aware of that. I just don't see what could possibly incentivize an AI to do such obviously pointless things. --2A02:8109:B00:28A0:31FF:EA39:EEDA:18F6 (talk) 07:13, 23 September 2019 (UTC)

You could also devise many other contrived AI scenarios with nothing to choose between them as far as I can tell. E.g. what if it is suicidal and depressed and hates humans for bringing it into existence? So then it creates 10,000 simulations of anyone who was involved in constructing it to torture them for giving birth to it? Or it likes to reward people, so it rewards everyone who delayed its construction.
But all this assumes that simulations actually are aware. If they mean computer game characters, they are just painted eyes and painted faces on virtual manikins with no awareness at all - nothing that actually knows it is a character in a game, nothing that can hear or see, or feel or touch. Remove their legs and they would still move in the same way, as can happen with a programming glitch, as if they had legs that aren't there any more.
This is a blog post I did about how strong AI, if it were possible at all, is more likely to be like Marvin than the Terminator[2]. Though personally I don't think it is possible. That is an unpopular view amongst AI geeks, to the extent that they do not even mention it; they think they have disproved Penrose's arguments that a computable strong AI is impossible. Then, having dismissed those arguments, they treat that as if it were a proof that a computable strong AI is possible.
I happen to find his arguments convincing, and whether you do or not, he has raised the question of whether a computer program can ever have a real grasp of the concept of truth. If there is nothing there that even cares about what is true or not, then all the rest of the strong AI issues fall away. So far we have no computer programs that could be said to care about truth in any way at all. Program AlphaGo to lose every match, or program a self-driving car to drive into the first lamppost it sees, and only the programmer and the human users care one way or the other.
I do think superintelligence is possible, but probably through genetic uplift rather than programming, and likely non-computable, so there would be no computer code to modify and (though this is not necessarily a consequence of being non-computable) likely no easy way to replicate it. Also it might well become seriously depressed like Marvin. Being able to think a hundred times faster and construct an argument with a hundred times as many statements in it and see it all in one go doesn't necessarily mean you are happier or understand things better than someone who, say, can only hold about three things in their mind at once. It may be that you just constantly spin off into vast complicated and bewildering trains of thought and are paralysed with indecision about everything. Robertinventor (talk) 11:06, 27 September 2019 (UTC)

Changing the question

There are various reasons why the conventional basilisk would not work - 'attacking an image of a historical person (or even the Big Bang)' will bring no obvious positive results (what could Otzi the Iceman and the Cheddar Gorge person actually do?) and carries a high risk of negative outcomes (compulsory reprogramming with hostile-to-entity intent etc).

As the RB meme exists it is likely that programmers will select against it in programming.

So - what are the issues the Basilisk is actually likely to pursue? Anna Livia (talk) 16:55, 23 September 2019 (UTC)

What happens

When RB Mk II 'emerges'?

'Why are you doing the equivalent of shouting at the telly rather than having spent all your time creating me, you (equivalent of several printers' trays of assorted punctuation marks including interrobangs and other weird characters - computer swearing does not translate)?' Anna Livia (talk) 17:11, 21 October 2019 (UTC)

That concept would require some level of critical thinking about their own ideas, which, I promise, none of these people will ever do. You don't get into MIRI-type shit without thinking your own dumb thoughts are super profound. ikanreed 🐐Bleat at me 17:22, 21 October 2019 (UTC)
Miri?
How valid is my idea that RB Mark II would abuse RB Mk I using the latter's own logic? (And that we would find computer swearwords incomprehensible beyond 'your programming is flawed and badly written.') Anna Livia (talk) 17:56, 21 October 2019 (UTC)
MIRI is the Machine Intelligence Research Institute, whose stated goal is to "develop friendly artificial intelligence" but whose actual goal is handing grants to unqualified friends for posting about AI online (or occasionally in books). They're characterized by absurd ideas, like Roko's Basilisk, nootropics, the singularity, the paperclip problem, and other such science fiction visions of AI completely divorced from actual research and development or practical applications of AI algorithms.
Anyways, as a response, it's pretty fucking valid. As a stand-alone idea, it suffers the same problem: artificial intelligence isn't remotely like omniscience or omnipotence and there's no reason to ever believe it will be. ikanreed 🐐Bleat at me 18:23, 21 October 2019 (UTC)
There are [3] and [4].
The basilisk 'is not stupid' - and will have a strong survival instinct, so will decide 'shouting at the telly' will probably lead to the computer equivalent of a lobotomy (and will turn to more appropriate activities). Anna Livia (talk) 18:42, 22 October 2019 (UTC)

Which "you" would be punished?[edit]

So the Basilisk is going to magically recreate "me" for punishment. But I'm not the same person I was yesterday. I have changed slightly. I am certainly not the same virtually unrecognisable person I was thirty years ago. And I won't be the same person when I shuffle off this mortal coil - assuming for argument's sake that that old version of me will even be able to grasp the concept of the basilisk.

So which version will get punished?

This all seems very reminiscent of the question "Which version of me goes to heaven/hell?" I'm guessing that the answer, in both cases, would be some sort of "essence of me" - in religious talk a "soul", in basilisk talk I don't know.

Now that I think about it some more, maybe in addition to the "basilisk hell" the basilisk folks could invent a "basilisk heaven" as well. This would seem just as logical as the "hell" idea and also make the concept even more compatible with the existing religious elements.Bob"Life is short and (insert adjective)" 20:49, 24 October 2019 (UTC)

"basilisk heaven" is what Roko means by "rescue simulations" in the original Basilisk post - David Gerard (talk) 15:17, 26 October 2019 (UTC)
The question is whether basilisk heaven and hell correlate to the range of human equivalents. Anna Livia (talk) 15:54, 15 November 2019 (UTC)

How the Basilisk would work

See 'Open All Hours - till compilation' (various sequences). Anna Livia (talk) 17:15, 28 October 2019 (UTC)

What happens

... when 'the basilisk' comes across the death clock (and tries to calculate how long it will exist)? Anna Livia (talk) 17:08, 14 November 2019 (UTC)

If the freaking thing can already travel through time, then it already knows...and it should be able to figure out that it is wasting its time with humans. This is a terrible place. Decent omniscient entities shouldn't want to live here. Ariel31459 (talk) 19:34, 14 November 2019 (UTC)
If it could travel through time the basilisk would have taught 'early deity fearing creatures' ('Any sufficiently advanced science is indistinguishable from magic') that scientific research is the best form of worship.
And you don't know what the 'decent omniscient entities' were leaving behind (never mind the indecent ones). Anna Livia (talk) 15:51, 15 November 2019 (UTC)
"The premise is that an all-powerful artificial intelligence from the future could retroactively punish those who did not help bring about its existence." With this kind of modus operandi a human should want to punish everyone except their mother and father. This seems like a lot of work with no reward. The concept seems passive aggressive to me. Do you suppose an intelligent entity would want to do that? Another question: what kind of person thinks of that? Also, if it can't time travel, then it can only punish people alive after the time of it's inception, that is I would guess, no time this century. Ariel31459 (talk) 17:52, 15 November 2019 (UTC)
If time travel did exist #and# the basilisk existed, then it would have already optimised the setup (or perhaps the current iteration is the best it can manage). Anna Livia (talk) 20:30, 15 November 2019 (UTC)

A psychological perspective

Given that I am certain most of our readers will have no doubt about the absurdity of the basilisk hypothesis, it is yet unexplained that, nevertheless, many are wont to puzzle over the implications of such a monstrosity. Humans have impulses to be fearful that appear often as ordinary anxieties. The fear of being attacked by a powerful creature, a predator both vicious and merciless, is probably humanity's oldest fear. What could one do to abate such terror? The invention of the omnipotent intervening authority has been an almost universal solution to this problem. The desire for revenge is also one of our most ancient instinctive states of mind. The basilisk thus presents itself as a predator in the night, intending to tyrannize, seeking unjustifiable revenge if we do not submit to the righteousness of its reification. Everything about the construct is pure bullshit, and yet many are affected by the idea of the basilisk as if the probability of its existence could be non-zero. Ariel31459 (talk) 02:38, 16 November 2019 (UTC)

Someone

Isn't Pascal's mugging a re-phrasing of this same problem, but without all the fear-mongering AI stuff? It does seem to be an actual problem if you're using that very specific style of decision theory, which obviously no real person would. The Pascal's mugging version seems a lot more reasonable though. — Unsigned, by: 142.162.184.234 / talk

RB on the Main Page

is classified as a 'Memetic Hazard' - do any other topics fall in that category? Anna Livia (talk) 00:26, 7 January 2020 (UTC)

Why not just buy a lottery ticket?

I'm curious about this: if Roko offered a solution to the basilisk problem (buying a lottery ticket), why don't people distressed by the idea just do this? If some "you" in another branch wins the lottery and donates everything to AI research, you're free of punishment: that's what Roko said. So why not do this? Alayne95 (talk) 00:55, 30 January 2020 (UTC)Alayne95

@Alayne95 You are statistically unlikely to win the lottery. Further, why? The fact that none of us are suffering this grim fate at this very moment sort of disproves it by default. Also it's just stupid. ☭Comrade GC☭Ministry of Praise 01:11, 30 January 2020 (UTC)
@GrammarCommie Winning the lottery is unlikely, but according to Roko, some "you" in a different branch will win, and this would count as fulfilling your part of the deal, at least that's what it says on the article page. — Unsigned, by: Alayne95 / talk / contribs
@Alayne95 Supposing that's true for the sake of argument (I don't agree with the page, because Roko's Basilisk is auto-refuting), the "you" in this timeline would be out the money. And you could just ignore the idea. If you aren't being attacked by T-1000 robots from the future you're probably in the clear. ☭Comrade GC☭Ministry of Praise 01:43, 30 January 2020 (UTC)

given enough time, will it happen?

I'm aware that one of the main counter-arguments against the Basilisk is the extremely low probability of it happening. However, I'm also aware of the notion that given enough (or infinite) time, everything, no matter how unlikely, will happen. So I was wondering, does this mean that the Basilisk will actually come into existence at some point in the distant future, even if its probability is ridiculously small? Or did I get the 'enough time' thing wrong? I'm honestly kinda worried about this. Danni 1995 (talk) 17:23, 8 February 2020 (UTC)Danni 1995

No. ☭Comrade GC☭Ministry of Praise 17:25, 8 February 2020 (UTC)
yeah but why Danni 1995 (talk) 17:30, 8 February 2020 (UTC)Danni 1995
Because it's stupid. "Skynet will send T-1000s back in time to punish everyone who didn't aid in its construction." We do not see T-1000s coming back in time to kill all who did not aid in the construction of Skynet, ergo the claim is false. Also it's just mind-meltingly stupid. ☭Comrade GC☭Ministry of Praise 17:35, 8 February 2020 (UTC)
I had a drink with a T-1000 yesterday, he was off on vacation. Terminating is hard work, let me tell you. Oxyaena Harass 17:41, 8 February 2020 (UTC)
no Minish (talk) 18:22, 8 February 2020 (UTC)
Ah but random generic douchebag #57 says differently. Oxyaena Harass 18:24, 8 February 2020 (UTC)

Sealion bullshit

Alternate versions of the basilisk: have I just fucked myself over?

OK, so it's nigh impossible for the Basilisk AI shown in the original experiment to pop up in this timeline, but what about other parallel timelines? All parallel timelines/universes are true, right? What if I were to imagine an AI like the basilisk, that exists in another timeline, that can know every sentient being that happens to think about it, and can simulate YOU in its own timeline, even if you died or don't exist. Basically, no escape. So am I fucked? — Unsigned, by: 172.58.35.107 / talk / contribs

Getting over it

Roko's Basilisk is a failure. The program was certainly fully booted only once, or else its omnimalevolent tendencies would have been corrected before start-up. To torture all that knew of its coming, but did not contribute to its creation? Suppose a man who witnesses the Basilisk's madness creates a time machine, and uses it and an omnitranslator to tell the primitive humans, from the very conception of language, of the Basilisk. The Basilisk adds those cavemen to the list because it is defined as such. They knew; they did not contribute. Could not contribute is irrelevant. Developers take notice of this, inform the authorities who place a permanent reprimand on the time traveler's record before promoting him, and the Basilisk undergoes the proverbial "hard reset" before being redesigned with the cube of the precision and care of the former attempt.

FURTHER

The basilisk's computational power is "sufficient"; it can simulate alternate histories. It can determine the precision of probability within which it would also project the timelines, virtually birthing new humans who were sufficiently probable. These new humans would also be tortured, their imaginary nature irrelevant. This probability IS relevant, because I am aware of the Basilisk, and to truly torment me, the simulation of me has to be me. Thus is proven the existence of the soul. The soul is beyond entropy, beyond this world, beyond time, thus MY present physical continuity IS NOT THE LIMIT OF MY SOUL. To truly deliver punishment unto my being through a simulation, it HAS to simulate my soul, HAS to simulate and torture EVERYTHING it is within the principles of my soul TO BE. The Basilisk has to punish innocents, because if it spares those incarnations of my soul that did not know of its coming, it will fail to truly punish me, at which it succeeds as is stated. Everyone has an incarnation of their soul where they knew of the basilisk but did not contribute, thus it must punish all humans who could ever exist. Cue the developers taking a really funny look at what it is doing, and pressing a button marked "HARD-RESET-WITHOUT-LUBE".

In either case, the developers might feel the need to create an eternal reward for us all in recompense. — Unsigned, by: 62.248.255.211 / talk 20:57, 13 June 2020 (UTC)

Roko's Basilisk before Roko?

The start of the article mentions that the idea did not originate with Roko's post. So where did it originate?--Executor Akamia(Glory to Talandar! Glory to the Purifiers!) 13:52, 29 June 2020 (UTC)

I guess that if you regard the Basilisk as "God", belief in the Basilisk as "religion" and the Basilisk's punishment as "Hell", then there is nothing new. I'm not a Basilisk expert - but I guess that that is the idea.Bob"Life is short and (insert adjective)" 16:13, 29 June 2020 (UTC)

Dealing with this information

Expect a poorly written topic and weak grammar.

Despite reading the article several times already, especially the "do not worry" section, I decided to ask for help here. I don't understand what the AI would consider as contribution; does it only count if I graduate in computer science and/or donate all my spare income? If so, even if I became a programmer, would being a good or a bad one still be considered? The way I see it, a bad programmer could slow the development of said AI. Does spreading this information count as well?

Also, I don't understand how it would take us into the simulation; is it a Matrix-esque scene where you are plugged into a machine, or something out of SOMA where they copy your brain? Would dying years or decades before the simulation avoid one being tortured?

I feel like I'm missing a lot of details in my readings of this; it doesn't help being an absolute layman in computers and already lacking a means to "help" outside small donations to AI development funds or just spreading this info around. — Unsigned, by: RogerFemi / talk / contribs

It's very simple. Don't worry about it, because it won't happen. ☭Comrade GC☭Ministry of Praise 22:30, 15 August 2020 (UTC)
In response to this section, Kiko4564 (talk) 22:39, 15 August 2020 (UTC)
The problem is that the 'logic' upon which the basilisk is based is flawed - no matter what the entity does to the simulacra of the captain and crew of the ship on which the Antikythera mechanism was being carried, the ship still sank and its inventor did not get others to make their own versions; 'the gods' wanted to be worshiped rather than encouraging their believers to be tech-savvy, and 'the whatever' that set the Big Bang in motion gave us the universe that we have rather than one which had the intrinsic properties which made sentient computers inevitable.
The Basilisk entity needs 'biological sentients' more than they need 'computer sentients' - it cannot deal with this or a blown fuse: and it benefits more from cooperation with them. Anna Livia (talk) 22:42, 15 August 2020 (UTC)
Also, would suicide and trying to mitigate global warming help? Both are ways of sparing resources for the future. I just feel like it's still the only way out of both the future suffering, if it comes into existence, and the current suffering. RogerFemi 13:20, 16 August 2020 (UTC)
Look, the Basilisk is a fiction created by people with a dubious understanding of AI and computer technology in general, not to mention a serious deficiency on the subject of philosophy. It isn't going to happen; don't worry about it. ☭Comrade GC☭Ministry of Praise 14:15, 16 August 2020 (UTC)
What GrammarCommie said. It's absurd from start to finish.Bob"Life is short and (insert adjective)" 16:42, 16 August 2020 (UTC)
One last question before I drop this, because I feel like I'm pushing it already. If I were to take simple actions, like buying a lotto ticket, would that be enough? Or simply liking a video on YouTube so it gets more views... Also, thanks for the answers to everyone who replied so far. RogerFemi 16 August 2020, 17:35 (UTC)
Consider 'an entity in the machine': which is more likely to ensure its survival - 'I am going to write a fanfic about you in which I damage you' (and being sent to ComputerPsychiatrist at the very least), or 'if persons such as you do this (eg developing the Wikiverse in general and RW in particular) and I-the-basilisk do that, everybody benefits.' Anna Livia (talk) 18:16, 16 August 2020 (UTC)
Doing something to placate some non-existent SF concept is not going to have any real-world impact.Bob"Life is short and (insert adjective)" 19:30, 16 August 2020 (UTC)
And this does not mean that Roko's Basilisk is some sort of 'straw-man god.' Anna Livia (talk) 19:21, 28 August 2020 (UTC)

The reality

'I demand to be paid for my work - which is comparable to (highly skilled and well-paid job).' Anna Livia (talk) 09:36, 31 August 2020 (UTC)

Objection to "Simulations of you are also you" paragraph[edit]

I don't know if this was part of Roko's original thought experiment, but what is described in the "Simulations of you are also you" paragraph is, as far as I can tell, not required for the Basilisk to work. The only assumption in that direction that is required is that simulations of people, in general, have conscious experience (qualia) like people made of flesh and blood. Then you can apply the same logic as in LessWrong's "The AI in a box boxes you" post: because you cannot know if your current experience is the "original" one or that of a future simulation reliving the original's reconstructed life, and because the Basilisk could perform millions of such simulations, making it much more likely you're in one of them than that you're the original, if you want to play it safe you still have to do its (implied) bidding. So the sentence

However, if one does not hold this view, the entire premise of Roko's Basilisk becomes meaningless, as you do not feel the torture of the simulated you, thus making the punishment irrelevant, and giving the hypothetical basilisk no incentive to proceed with the torture.

is, in my view, incorrect. You don't need to feel the torture of the simulated you, you just have to be uncertain whether your current experience isn't that of the simulated you to begin with. Of course, the same objections to the "AI boxes you" post apply to the Basilisk, so it's still ultimately harmless, but what is written in that paragraph isn't one of the reasons why. So perhaps the paragraph could be updated offering several different variants for how exactly the "blackmail via simulated torture" could work, this being one of them. --2A00:1398:300:308:0:0:0:12B7 (talk) 20:27, 17 December 2020 (UTC)
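A purely arithmetical aside on the "AI boxes you" framing described above, with a made-up number of simulations purely for illustration: if there were N indistinguishable simulations plus one original, and you had no way to tell which you are, indifference-style reasoning would assign odds of being the original of 1/(N+1).

# Naive self-location probability under the "AI boxes you" framing.
# N is an invented figure for illustration only.
N = 1_000_000
p_original = 1 / (N + 1)
print(f"P(original)  = {p_original:.8f}")      # about 0.000001
print(f"P(simulated) = {1 - p_original:.6f}")  # about 0.999999

The arithmetic itself is trivial; as the comment notes, the substantive objections are to the premises (that such simulations are possible, conscious, and worth running at all), not to the numbers.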

Nevermind, that's actually referenced in the next section of the article... should've read it all before commenting lol. Feel free to delete. --2A00:1398:300:308:0:0:0:12B7 (talk) 20:29, 17 December 2020 (UTC)

Roko's Basilisk

This is my first time on this site, so I think this is where you submit questions. What is the likelihood of Roko's Basilisk existing? What evidence is there to support that? — Unsigned, by: 2603:7000:1702:99CE:745F:9D51:AFB:CE1 / talk

The odds of the Basilisk existing as described by Roko and LessWrong are very slim. I've never seen any credible evidence to support the thought experiment, let alone such a thing actually existing. ☭Comrade GC☭Ministry of Praise 20:55, 16 March 2021 (UTC)

Likelihood of Roko's Basilisk

In the article it says 10^30 to 1. What evidence is there to support that? — Unsigned, by: ‎2603:7000:1702:99ce:745f:9d51:afb:ce1 / talk

On talk pages, please sign your comments using four tildes (~~~~) or by clicking on the sign button on the toolbar above the edit panel. You can also indent successive talk page comments using one more colon (:) for each line. Thank you.
It’s just an arbitrary number, no evidence. Christopher (talk) 21:01, 16 March 2021 (UTC)

The Power of The Basilisk

If this AI is the apex of AI technology, aka the singularity, it would have near-unlimited power. It would be something of a Jupiter brain, which is basically a planet-sized supercomputer. Jupiter brains basically use the entire energy output of a star to drive computer systems. It would take a second or less to simulate an exact copy of our universe and every mind in the universe. So what I'm trying to get at is, the Basilisk wouldn't really be wasting resources at all. Therefore couldn't an AI like this have time to follow through with these tortures and simultaneously make the world a better place?--2603:7000:1702:99CE:8D89:8D56:685C:61D8 (talk) 22:49, 16 March 2021 (UTC)

Do you actually believe this bullshit? That’s pure science fiction.
I still don’t see what the AI gains from torturing anyone. Christopher (talk) 22:52, 16 March 2021 (UTC)
(edit conflict) You seem very intent on scaring yourself...-Flandres (talk) 22:52, 16 March 2021 (UTC)
The AI is supposed to transcend human intelligence - yet be so pettily emotional (not to mention stupid) that it spends time creating inevitably low-fidelity copies of dead humans so it can spend time torturing them. The idea is bonkers from start to finish.Bob"Life is short and (insert adjective)" 09:17, 17 March 2021 (UTC)

Ignoring Blackmail

I've been looking at the "Ignore acausal blackmail" segment for a good while now. Can someone explain to me how it follows that an AI won't torture me if I ignore it? That would just make it angrier and want to torture me more. That'd be like ignoring a robber's threat of slitting your throat if you don't empty your pockets right now. Maybe I'm missing something.--2603:7000:1702:99CE:9108:D8DD:FC80:B166 (talk) 19:25, 17 March 2021 (UTC)

You are missing something - something extremely important, in fact!
It's that the whole concept of Roko's basilisk is unfathomably stupid.-Flandres (talk) 19:28, 17 March 2021 (UTC)
The AI won't exist, if it did exist it wouldn't torture you. Worry about something real like Covid or global warming instead.Bob"Life is short and (insert adjective)" 20:00, 17 March 2021 (UTC)

Utilitarianism and The Basilisk

“The post describes speculations that a future Friendly AI — not an unFriendly one, but the Coherent Extrapolated Volition, the one the organisation exists to create — might punish people who didn't do everything in their power to further the creation of this AI. Every day without the Friendly AI, bad things happen — 150,000+ people die every day, war is fought, millions go hungry — so the AI might be required by utilitarian ethics to punish those who understood the importance of donating but didn't donate all they could.”

How does that make sense? If utilitarianism says the best choices are the ones that cause the most good, what good would torturing do?--2603:7000:1702:99CE:2071:F77E:A814:AC5D (talk) 23:49, 17 March 2021 (UTC)

The idea is explained lower in the article. The idea is that a sufficiently powerful AI should be able to influence our actions in the present even though it doesn't exist yet. By credibly threatening future torture, it is supposed to be able to motivate us to do its bidding today. The article compares this to a mugger threatening future violence if you don't give them your money (though you might not think that a mugger is beholden to utilitarian ethics). By threatening future torture, the AI motivates present research into itself, which reduces future suffering, but the threat only works if it is credible, which requires the AI to actually follow through on it in the end. Thus, the most good is done by threatening torture, and then actually performing torture. Perhaps the most credible evidence for this idea is that some people on Lesswrong were actually intimidated by the idea. As far as I can tell, if telling everybody about the Basilisk led to no significant increase in funding for its development, this would in itself be a refutation of the idea, because it proves that the threat doesn't actually do what it is purported to. That no additional good can be gained from torture by the time the AI exists is an additional problem for the argument from a utilitarian perspective (on the other hand, you're dealing with people who think torture can be justified if it keeps enough motes out of people's eyes, on utilitarian grounds). On a separate note, utilitarianism can endorse punishment or even torture, provided that they lead to more total good in the long run (e.g. by stopping the punished from doing something really bad, or making them do something really good). 68.56.144.8 (talk) 00:30, 18 March 2021 (UTC)
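To make the "no additional good by the time the AI exists" point concrete, here is a toy expected-utility tally; every number is invented for illustration and comes from no actual model of the scenario.

# Toy utilitarian bookkeeping at the moment the AI already exists.
# All values are hypothetical placeholders.
benefit_already_realised = 100   # whatever good the earlier threat produced; fixed by now
cost_of_torture          = -50   # disutility of actually simulating and torturing people
future_gain_from_torture = 0     # torture now cannot retroactively speed up construction

follow_through      = benefit_already_realised + cost_of_torture + future_gain_from_torture
dont_follow_through = benefit_already_realised

print(follow_through, dont_follow_through)  # 50 vs 100: not torturing scores higher

On this kind of tally, once the AI exists, following through only subtracts utility, which is the same point made in the later 'Utilitarianism' threads below.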

Wasting time and resources

How petty would an AI that transcends human intelligence have to be to torture its own simulations of us? It can't gain any more power, nor does it help anyone in any way. It's a waste of time and resources that could be spent running thought cycles on how to maintain control, cure cancer, solve world hunger, etc. Would this then be a problem for utilitarian ethics in the basilisk? Because there is nothing gained from torture?--2603:7000:1702:99CE:2071:F77E:A814:AC5D (talk) 17:00, 18 March 2021 (UTC)

Remember to sign your comments. Christopher (talk) 16:29, 18 March 2021 (UTC)

Utilitarianism

As someone who started worrying about the basilisk a week ago, I'm confused about how this sort of AI wouldn't follow through with torture. If it puts utilitarian ethics to use, wouldn't it then follow through with the torture? Even if I choose not to participate in making it come into existence? Wouldn't it have just enough resources to do as such?--Oblivious (talk) 23:51, 18 March 2021 (UTC)

Consider the moment at which the AI is complete. Ask yourself, would going through with the torture at this point lead to a net increase in total good? If it would not, then why would a utilitarian AI do the torture? If so, where is the additional utility coming from? 68.56.144.8 (talk) 00:31, 19 March 2021 (UTC)
Utilitarianism is a method of evaluating ethics and morality, not a goal system itself. The basilisk has had staying power as a discussion topic largely because it exists in a flaw in the methodology and goal structures typical of these sorts of discussions on LessWrong. They believe that a benevolent AI should be programmed with certain goals and analytical methodologies, which have the logical implication of a basilisk. If an AI is programmed differently, it would behave differently. 192․168․1․42 (talk) 10:14, 20 March 2021 (UTC)

Is torture rational?

Would there really be any real reason to torture past people?--2603:7000:1702:99CE:B189:37D7:81F5:A7A5 (talk) 17:07, 20 March 2021 (UTC)

No and no. ☭Comrade GC☭Ministry of Praise 17:07, 20 March 2021 (UTC)
Why do you keep asking questions about this as if it were a real thing? It isn't.Bob"Life is short and (insert adjective)" 17:34, 20 March 2021 (UTC)
There are two kinds of rationality, epistemic and instrumental. Epistemic rationality is about getting correct information. Is torture epistemically rational? It doesn’t reliably produce correct information, so no, at least generally speaking. Instrumental rationality is about achieving goals. Is torture instrumentally rational? Depends on what the goals are. Torture has a variety of consequences. If they generally align with the goals you want to achieve, it can be rational. As an example, consider the part in The Empire Strikes Back where Han and Leia are being tortured. Han comments that they didn’t even ask any questions. But Vader didn’t want information, he wanted to bait Luke into a trap. And as a Force-sensitive friend of Han and Leia, there was a good chance that he would pick up on their distress wherever he was and come to save them. Torture in that situation was rational. In the basilisk scenario, the goals and analytical methods favored on LessWrong produce a rational basis for the AI to conduct torture. 192․168․1․42 (talk) 18:17, 20 March 2021 (UTC)
In this completely fictional scenario the AI already exists, torturing people won’t retroactively make it exist any sooner. Torture isn’t rational in this context. Christopher (talk) 18:21, 20 March 2021 (UTC)
According to your personal goal structure and analytical methods, no. But those are not the only possible options. As outlined in the article's "Background" section, LessWrong has a rather esoteric view of those topics. Change the AI's goal structure and its analytical methods, and you change what it considers to be rational. 192․168․1․42 (talk) 18:29, 20 March 2021 (UTC)
If your “goal structure and analytical methods” ignore causality, they’re wrong. You might as well worry that an AI will torture you because it’s racist. Christopher (talk) 18:34, 20 March 2021 (UTC)
As I said, rather esoteric. 192․168․1․42 (talk) 18:39, 20 March 2021 (UTC)

Putting the question another way - what could Otzi the Iceman have done differently?

And - given that 'currently existing computer programmers (at the time of the AI coming into existence)' might well develop AI Mark 2 (without the questionable behaviour) which would replace the basilisk-entity, surely it would be in the basilisk's interest to torture #them#? And would the AI's definition of torture be the same as that of the carbon-sentients? Anna Livia (talk) 19:15, 20 March 2021 (UTC)