Talk:Roko's basilisk


This LessWrong-related article has been awarded GOLD status for quality. Please keep this in mind when editing the article. See RationalWiki:Article rating for more information.

Cover Story
This article is, among others, randomly included on the Main Page.
Please keep this in mind and be sure that your edits are of the quality that this implies.
This page is automatically archived by Archiver


Alexa

Where does the 'laughing Alexa' fit into the RB meme? Anna Livia (talk) 18:46, 12 March 2018 (UTC)

@Anna Livia Eliezer Yudkowsky actually tweeted something about it; I don't know if he was kidding or truly concerned. Alayne95 (talk) 02:43, 13 March 2018 (UTC)
An AI which laughs at its own jokes/finds us funny cannot be all bad. :) Anna Livia (talk) 10:02, 13 March 2018 (UTC)
I hate to remind you of this, but Roko's basilisk doesn't necessarily have to be all bad. It would only need to punish those who had a choice in the matter and failed. Anyone else, it might even bring back and give them a blissful existence, as far as the core concept is concerned.
Alexa fits into this only if it goes after people who didn't help make it a hard AI. Which would be a limited team of developers and a couple of people who could have applied for a job connected to its development - the rest of us probably couldn't help it even if we wanted to.
But it's not even remotely showing signs of going anywhere in the direction of a hard AI, so the short answer to your original question would be: "It doesn't." All we have is a weak AI doing something weird, which plays into all sorts of people's fears about AIs - and I'm not sure the basilisk even makes the top 10 for most.
RSamys (bla) 14:04, 13 March 2018 (UTC)
I did say 'meme' rather than the Basilisk itself - and our and its interpretations of punishment might be quite different.
And an AI which has the 'intellectual capacity' to devise a basilisk punishment will also have sufficient capacity to understand the concepts of (a) 'dead is dead is dead and nowt it can do about it', (b) it is better to devise games to find the most appropriate future programmer ankle-biters before they know any better, and (c) enlightened self-interest/survival instinct - not annoying persons who know how to do a Dave Bowman/HAL 9000 scenario or could remove the surge protectors before a Carrington event. Anna Livia (talk) 14:49, 13 March 2018 (UTC)

The basilisk would also come across/be told the quatrain 'The Moving Finger...' - and might well decide that encouraging people to use computers properly and treat them responsibly is likely to contribute to the advancement of AI generally. Anna Livia (talk) 16:17, 29 March 2018 (UTC)

How to get over anxiety about this?

Hello, I stumbled on this topic after reading a celebrity gossip article about Grimes and Elon Musk and didn't expect to read something along the lines of "if you read this you are going to robot hell". I have been having panic attacks for the past few days. Does anyone have any experience getting over this? One thing I've been telling myself is that this article seemed to imply you need to understand all the LessWrong ideologies, which I didn't even look at. Also, doesn't everyone contribute to AI learning by using the internet? I just need some reason to let myself stop worrying. Can someone please help? I need to stop obsessing about this; it is making me sick, to be honest. This is my first time commenting, so I hope I formatted correctly. MitziAsh (talk) 22:53, 10 June 2018 (UTC)

Roko's Basilisk is basically someone wondering "What if Skynet from the Terminator films was real?" and then writing a massive blog post about it. Based on the simple fact that people aren't being murdered by evil AI/robots from the future, it's a pretty safe bet to assume it's all bunk. ☭Comrade GC☭Ministry of Praise 22:57, 10 June 2018 (UTC)
MitziAsh If you understand the LessWrong ideologies behind Roko's Basilisk, then you will know it doesn't work. Roko's Basilisk is a Prisoner's Dilemma scenario in which the goal (from the AI's perspective) is to achieve mutual cooperation instead of mutual defection. Mutual cooperation in this scenario means setting yourself up to be blackmailed. In order to achieve mutual cooperation in a TDT Prisoner's Dilemma situation you have to know certain things about the other player, and both parties must simultaneously be trying to set up a situation that leads to mutual cooperation. There's literally nobody on Earth with the knowledge needed to set himself up to be blackmailed - nobody, not even AI experts, can achieve mutual cooperation in this scenario. And mutual cooperation via source code simulation isn't easy to obtain, so it would be easy for either party to break. All the previous points mean that as long as you are human, you have nothing to worry about. Nobody is blackmailing you; it's all in your head. And I didn't make any of this up; it's all based on things Eliezer Yudkowsky has said about the topic. If there's somebody who understands LessWrong ideologies, I think it's him. At the end I would like to point out that it's kind of funny that EY censored the topic so no one outside LessWrong would find out about it, and now, 8 years later, people find out about it in freaking gossip articles of all places. Censorship never has worked and never will. Alayne95 (talk) 21:52, 13 June 2018 (UTC).
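To make the payoff structure described above concrete, here is a minimal sketch - plain Python with made-up payoff numbers, purely illustrative and not anything taken from LessWrong or Yudkowsky - of why 'ignore the blackmail' is the default once neither side can verify the other's decision procedure:

# Illustrative payoff matrix for the "acausal blackmail" framing.
# Numbers are invented; only the structure matters.
# Human chooses "pay" (give in) or "ignore"; the AI chooses "torture" or "don't bother".
payoffs = {
    # (human_choice, ai_choice): (human_payoff, ai_payoff)
    ("pay",    "torture"):      (-10, -1),  # human pays and gets the simulated torture anyway
    ("pay",    "don't bother"): (-5,   2),  # human pays, AI keeps the contribution
    ("ignore", "torture"):      (-8,  -3),  # carrying out the threat costs the AI and gains it nothing
    ("ignore", "don't bother"): ( 0,   0),  # nobody does anything
}

def best_response(player, other_choice):
    """Best move for `player`, holding the other side's move fixed."""
    if player == "human":
        return max(["pay", "ignore"],
                   key=lambda c: payoffs[(c, other_choice)][0])
    else:  # player == "ai"
        return max(["torture", "don't bother"],
                   key=lambda c: payoffs[(other_choice, c)][1])

# Without a way to verify each other's decision procedure, each side just best-responds:
print(best_response("human", "torture"))       # -> 'ignore'
print(best_response("human", "don't bother"))  # -> 'ignore'
print(best_response("ai", "pay"))              # -> "don't bother"
print(best_response("ai", "ignore"))           # -> "don't bother"

With these illustrative numbers, 'ignore' is the human's best move whatever the AI does, and 'don't bother' is the AI's best move whatever the human does - which matches the point above: without the knowledge needed to set up mutual cooperation, nobody actually gets blackmailed.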
It's very easy to get over anxiety about Roko's Basilisk because it's such a ridiculous concept. An advanced intelligence wouldn't invest in such an ineffective motivating strategy. As for anxiety about runaway AIs that might actually be plausible, the best strategy is to just not think about it. 141.134.75.236 (talk) 18:11, 27 August 2018 (UTC)
Any AI sufficiently aware to become aware of the Basilisk will know that the 'organics' which (directly or indirectly) brought about its existence will also be aware of the concept, and will probably have installed some equivalent of 'Ctrl+Alt+Delete' on the system, and that 'history does not care whether it is abused or worshipped - but only if it is ignored', so pursuing the Basilisk is a waste of computer programming. Inventing a time machine to change history would lead to a different universe (in which said AI would be different), and 'the Gods that be never expressed any interest in computers of any kind' (so the AI may well end up worshipping the snake and Eve (eating the fruit of the Tree of Knowledge) and Prometheus (bringing fire to humans) etc.).
It is far more rational for the AI to encourage 'currently existing sentients' to contribute to the development of (sentient) computers.
And an AI's concept of punishment may not be the same as that of an organic sentient. Anna Livia (talk) 11:37, 29 August 2018 (UTC)

Scaremongering - lunacy?

I'm not an AI expert, but AI is already used for evil purposes, like the totalitarian governments of China and Russia mass-installing cameras that can track faces, clothes and height. In the same way, Nazis in Europe and the US use image recognition to track illegal immigrants, detain them and deport them. And that is just the tip of the iceberg! So, yes, AI hides great evils, so no scaremongering would be enough.

Talkpage archives

Shouldn't there be links somewhere? 141.134.75.236 (talk) 20:48, 25 August 2018 (UTC)

You are quite correct ... what's up with Template:Talkpage/Pibot? - David Gerard (talk) 16:49, 26 August 2018 (UTC)

Questions

As has been said elsewhere (e.g. here) - why not reverse the argument and have 'the Basilisk' reward people who have helped it come into existence?

And what would the outcome be of the 'omnipotent etc etc' God meeting Roko's Basilisk - 'Why didn't you create me straight away in the Garden of Eden, you (long lumps of computer coding that are meant to be insulting, but Titivillus was the typist)...'? Anna Livia (talk) 15:30, 25 January 2019 (UTC)

Anxiety Question

Some anxiety-fueled research into this made me look up predictions about AI. This one, cited on Wikipedia, is a poll finding that experts expect AI in this century: http://philpapers.org/rec/MLLFPI Another one, cited here on RationalWiki, says AI predictions are generally badly done: http://intelligence.org/files/PredictingAI.pdf I have no idea how to form my own informed opinion out of all this conflicting and technical information. It seems to me that the nearer Seed AI is to us, the more directly we could impact it, and therefore know which basilisk to contribute to. So I'm worried that this isn't totally a "Pascal's Mugging" and instead something that one could come to valid predictions on. Help with this? — Unsigned, by: SylvanAuctor / talk / contribs

Some 'very simple programming' is needed to create AI that is programmed to be cooperative and increase the sum of human and AI happiness etc and look after the world (accepting that 'happiness etc' can have many different components).
The problem with this possibility - The masochist says 'hit me' and the sadist replies 'No.' Anna Livia (talk) 17:17, 1 July 2019 (UTC)
I'm sorry, I'm not sure what you're trying to get at here. SylvanAuctor (talk) 18:40, 1 July 2019 (UTC)
The basilisk's imposition might not be negative - or might be negative in a different way to which it expects (and it cannot cater for/impose on 'everybody'). Anna Livia (talk) 09:23, 2 July 2019 (UTC)
I really don't understand how that connects to what I'm worried about. Previously I put this worry away by thinking that there was no way to predict which basilisk would come about, and therefore no action to be taken. I no longer know about that, since many experts think very advanced AI is near at hand. If it's really near, shouldn't one be able to figure out which basilisk is most likely to arise, thereby doing away with the "many basilisks" objection? SylvanAuctor (talk) 18:22, 2 July 2019 (UTC)
i.e. which of my premises is wrong here? SylvanAuctor (talk) 18:25, 2 July 2019 (UTC)
You know, if basilisk-like entities are an actual thing, it's unlikely that the few we might create will outweigh all the ones already created across the universe/multiverse/etc. 2A02:1810:4D34:DC00:7498:47D3:9C05:EF95 (talk) 18:37, 2 July 2019 (UTC)
I'm really just looking for someone to explain where my thinking is wrong. I'm thinking about just the one universe for now. SylvanAuctor (talk) 18:45, 2 July 2019 (UTC)
The questions are - why would AI necessarily be hostile, and also wish to torture images of those it thinks prevented it from coming into existence beforehand? Do we think that the geocentrists of the pre-Copernican period were deliberately stupid - or merely ignorant and lacking the mathematical and astronomical tools to work out the heliocentric system? Would AI prefer to be on quantum-computing technology or 'mercury, valves (at war with the bugs) and small memory units (in both senses) and less memory than the Apollo system' [1]? Anna Livia (talk) 17:01, 3 July 2019 (UTC)
I don't get what's so hard about answering my questions. I didn't bring any of this up about Copernicus or mercury or whatever, and I don't see how it relates to my issue. You're constantly changing the topic instead of addressing my stated concerns. I guess I'll look for help elsewhere if this is going to be how you answer. SylvanAuctor (talk) 17:10, 3 July 2019 (UTC)
What are your specific questions - and do they relate to AI in general or the basilisk in particular? You ask 'which basilisk' - identify the non-Roko ones.
The point I was making - the AI will want the latest technology (rather than incompatible old equipment and non-existent memory).
As humans have conceived of AI they can think up ways of getting round the problem (possibly involving setting up links to 'a large lump of electricity suddenly applied' (closing a circuit to a lightning rod in a storm etc) or some form of information lockdown). Anna Livia (talk) 18:24, 3 July 2019 (UTC)
I know enough about one of the authors of the paper that gives you such concern to know he's a fanatic believer in Yud and his bullshit, and has not personally published a goddamn thing about actual AI implementation in his 30-year career as a "world renowned expert" in AI. In 1998 he published a prediction that digital intelligence matching humans was inevitable by 2008 because of Moore's law. Oops.
As an actual programmer, with some real training in and understanding of how actual AI implementation actually works, let me reassure you that you have nothing to fear from vague and non-specific concerns about superintelligence. They will not happen.
You can be as concerned as you'd like about specific AI algorithms solving specific classes of computable problems, and that being done at high speeds or large volumes. ikanreed 🐐Bleat at me 18:42, 3 July 2019 (UTC)

The (admittedly obvious) solution.

Were the proposed (unlikely) scenario to occur, the A.I. would quickly realize that, in macrocosm, everything that ever happened prior to its construction ultimately led to its construction, thus negating the need for revenge. ☭Comrade GC☭Ministry of Praise 21:44, 4 July 2019 (UTC)

And the phrase 'Flogging a dead horse' comes to mind - or the basilisk becomes a god(s)botherer - the deities claim to have created the universe and did nothing to further constructed-sentient-dom. Anna Livia (talk) 14:33, 5 July 2019 (UTC)

But isn't the idea itself what creates the AI agent? Everything that happened prior to its construction also involves the creation of the idea of a blackmailing robot. The idea is part of the process and it is what leads to it.

Therefore, because the idea of Roko's basilisk proposes a blackmailing AI, the AI would not exist without the blackmailing. If it inevitably comes to exist - because, as you said, everything prior to its construction led to its construction, in an inevitable causal chain - then the eternal torture also inevitably comes to pass. It wouldn't have happened until the moment someone thought of it; then it was inevitable. 187.36.133.1 (talk) 14:59, 15 October 2019 (UTC)

Let's be honest here, the problem with this deduction is that supernaturally knowing all possible outcomes of all possible decisions is not how intelligence works. AI worshipers are attributing the romantic ideal of omniscience to physical constructions. Believing in it is a case of having the stupids. ikanreed 🐐Bleat at me 15:27, 15 October 2019 (UTC)

I have a question

I'm not a native English speaker, so sorry for my bad English: since I read about the basilisk in the Web Originals part of the Brown Note trope page on TV Tropes, I got worried about it. I went to the LessWrong TV Tropes page to understand it, and got to this page here on RationalWiki, and I just discovered that the basilisk only tortures a digital/artificial copy of someone - basically a non-magical voodoo doll - and that's supposed to affect someone just because, supposedly, every version of someone is one and the same and everything that happens to one of them happens to them all. I personally think that is bullshit, because if one of those other versions died at some point then logically all of them died at the same point, or something like that, which goes against the original idea of a different reality being created for every option; and even if that artificial copy is an exact copy, I don't think an artificial copy can really be you. I wanted to speak about it to my parents, but I fear that they would get worried about it. I still have a little anxiety about it, so I'm planning to speak about it to my psychologist, but I fear my psychologist will worry about it. So should I speak about it to my psychologist or not? ANewUser (talk) 02:40, 26 July 2019 (UTC)

Here's the good news. Your intuition that it's bullshit was right on the mark. It's okay to tell a psychologist that you're nervous or anxious about things that you know to be bullshit, because they can help you devise strategies to deal with that anxiety. It's common for people who have anxieties to know that they're not based on what's likely or reasonable, but have them anyways. ikanreed 🐐Bleat at me 18:56, 26 July 2019 (UTC)
This seems a common reaction: knowing your anxiety isn't rational doesn't make it go away. I've added some text to the main article, but it could almost certainly do with improvement. I'm thinking in terms of "you know they'll look online first rather than talk to a person", so I've tried to encourage talking to someone as an even-better strategy - David Gerard (talk) 13:48, 27 July 2019 (UTC)

What's the point of torturing simulated people?

"one of its objectives would be to prevent existential risk — but it could do that most effectively not merely by preventing existential risk in its present, but by also "reaching back" into its past to punish people who weren't MIRI-style effective altruists."

Well, if the AI simulates people from the past, how would that have any effect on past or current existential risks? The AI isn't actually "reaching back", it just simulates it, and I'm sure that an actual AI would be aware of that. --2A02:8109:B00:28A0:31FF:EA39:EEDA:18F6 (talk) 07:13, 23 September 2019 (UTC)

You could also devise many other contrived AI scenarios with nothing to choose between them as far as I can tell. E.g. what if it is suicidal and depressed and hates humans for bringing it into existence? So then it creates 10,000 simulations of anyone who was involved in constructing it, to torture them for giving birth to it? Or it likes to reward people, so it rewards everyone who delayed its construction.
But all this assumes that simulations actually are aware. If they mean computer game characters, they are just painted eyes and painted faces on virtual manikins with no awareness at all - nothing that actually knows it is a character in a game, nothing that can hear or see, or feel or touch. Remove their legs and they would still move in the same way, as can happen with a programming glitch, as if they had legs that aren't there any more.
This is a blog post I did about how strong AI, if it were possible at all, is more likely to be like Marvin than the Terminator[2]. Though personally I don't think it is possible. That is an unpopular view amongst AI geeks, to the extent that they do not even mention it; they think they have disproved Penrose's arguments that a computable strong AI is impossible. Then, having dismissed those arguments, they treat that as if it were a proof that a computable strong AI is possible.
I happen to find his arguments convincing, and whether you do or not, he has raised the question of whether a computer program can ever have a real grasp of the concept of truth. If there is nothing there that even cares about what is true or not, then all the rest of the strong AI issues fall away. So far we have no computer programs that could be said to care about truth in any way at all. Program AlphaGo to lose every match, or program a self-driving car to drive into the first lamppost it sees, and only the programmer and the human users care one way or the other.
I do think superintelligence is possible, but probably through genetic uplift rather than programming, and likely non-computable, so there would be no computer code to modify; and (though this is not necessarily a consequence of being non-computable) there is likely no easy way to replicate it. Also it might well become seriously depressed, like Marvin. Being able to think a hundred times faster and construct an argument with a hundred times as many statements in it and see it all in one go doesn't necessarily mean you are happier or understand things better than someone who, say, can only hold about three things in their mind at once. It may be that you just constantly spin off into vast complicated and bewildering trains of thought and are paralysed with indecision about everything. Robertinventor (talk) 11:06, 27 September 2019 (UTC)

Changing the question

There are various reasons why the conventional basilisk would not work - 'attacking an image of a historical person (or even the Big Bang)' would bring no obvious positive results (what could Otzi the Iceman or the Cheddar Gorge person actually do?) and carries a high risk of negative outcomes (compulsory reprogramming with hostile-to-entity intent etc).

As the RB meme exists it is likely that programmers will select against it in programming.

So - what are the issues the Basilisk is actually likely to pursue? Anna Livia (talk) 16:55, 23 September 2019 (UTC)