Talk:Roko's basilisk

From RationalWiki

This LessWrong-related article has been awarded GOLD status for quality. Please keep this in mind when editing the article. See RationalWiki:Article rating for more information.

Cover Story
This article is, among others, randomly included on the Main Page.
Please keep this in mind and be sure that your edits are of the quality that this implies.
Its front-page abstract can be found here and its editnotice here.
This page is automatically archived by Archiver

Reversal of this thought experiment

Hi! I found this page because I was concerned about the possibility of a mad scientist (human) who would create an AI capable of suffering only to eternally torment it. Somewhat like the sick people who torture animals, except in this instance the AI would be unable to escape via death. Are there any safeguards to prevent such a thing from occurring? — Unsigned, by: 107.77.229.114 / talk

We currently lack the capability to create an AI that is capable of suffering. If we do gain this capability before we go extinct, I doubt it will become widespread enough for some rogue “scientist” to make one; there’s just no need for them. Christopher (talk) 11:31, 2 March 2018 (UTC)
What was the creature in Hitchhiker's that was bred to want to be eaten? Anna Livia (talk) 13:07, 2 March 2018 (UTC)
You don’t think someone could replicate the intelligence of a puppy? Sony wouldn’t want to make Aibo 2035 capable of feeling mild sadness/despair when its owner goes to work? Seems we would be able to do this early on. This scenario seems probable to me. Frankly, I’m counting down to the day when all meat is grown in a lab in order to reduce the significant suffering that occurs on factory farms, so Aibo 2035 scares me.— Unsigned, by: 68.3.72.232 / talk / contribs
On talk pages, please sign your comments using four tildes (~~~~) or by clicking on the sign button on the toolbar above the edit panel. You can also indent successive talk page comments using one more colon (:) for each line. Thank you. CowHouse (talk) 04:54, 3 March 2018 (UTC)

About ignoring negative incentives

I have a question about the 'ignore negative incentives' argument: what if a person reads about the proposition and gets scared, so decides to 'cooperate' just in case, by spreading the word or something like that, therefore letting the negative incentive influence his/her actions, but then, after doing that, reads about the 'ignore blackmail' argument and realizes what he/she really should have done? Can that person still ignore the negative incentives, even after doing what he/she did? Or is there another strategy for cases like that? Alexander Kruel talks about this in one of his articles, saying that human decision-making is time-inconsistent and changes with the person's beliefs. — Unsigned, by: Alayne95 / talk / contribs

@Alayne95 Can somebody please answer this question? Thanks in advance. Alayne95 (talk) 18:21, 28 March 2018 (UTC)

Alexa

Where does the 'laughing Alexa' fit into the RB meme? Anna Livia (talk) 18:46, 12 March 2018 (UTC)

@Anna Livia Eliezer Yudkowsky actually tweeted something about it; I don't know if he was kidding or truly concerned. Alayne95 (talk) 02:43, 13 March 2018 (UTC)
An AI which laughs at its own jokes/finds us funny cannot be all bad. :) Anna Livia (talk) 10:02, 13 March 2018 (UTC)
I hate to remind you of this, but Roko's basilisk doesn't necessarily have to be all bad. It would only need to punish those who had a choice in the matter and failed. Anyone else, it might even bring back and give them a blissful existence, as far as the core concept is concerned.
Alexa fits into this only if it goes after people who didn't help make it a hard AI, which would be a limited team of developers and a couple of people who could have applied for a job connected to its development; the rest of us probably couldn't help it even if we wanted to.
But it's not even remotely showing signs of going anywhere into the direction of a hard AI, so the short answer to your original question would be: "It doesn't." All we have is a weak AI doing something weird, which plays into all sorts of people's fears about AIs and I'm not sure the basilisk even makes the top 10 for most.
RSamys (bla) 14:04, 13 March 2018 (UTC)
I did say 'meme' rather than the Basilisk itself - and our and its interpretations of punishment might be quite different.
And an AI which has the 'intellectual capacity' to devise a basilisk punishment will also have sufficient capacity to understand the concepts of (a) 'dead is dead is dead and nowt it can do about it', (b) it is better to devise games to find the most appropriate future programmer ankle-biters before they know any better, and (c) enlightened self-interest/survival instinct and not annoy persons who know how to do a Dave Bowman/Hal 9000 scenario or could remove the surge protectors before a Carrington event. Anna Livia (talk) 14:49, 13 March 2018 (UTC)

The basilisk would also come across/be told the quatrain 'The Moving Finger...' - and might well decide that encouraging people to use computers properly and treat them responsibly is likely to contribute to the advancement of AI generally. Anna Livia (talk) 16:17, 29 March 2018 (UTC)

How to get over anxiety of this?

Hello, I stumbled on this topic after reading a celebrity gossip article about Grimes and Elon Musk and didn't expect to read something along the lines of "if you read this you are going to robot hell". I have been having panic attacks for the past few days. Does anyone have any experience getting over this? The few things I've been telling myself are that this article seemed to imply that you need to understand all the LessWrong ideologies, which I didn't even look at. Also, doesn't everyone contribute to AI learning by using the internet? I just need some reason that I can let myself stop worrying. Can someone please help? I need to stop obsessing about this; it is making me sick to be honest. This is my first time commenting, so I hope I formatted correctly. MitziAsh (talk) 22:53, 10 June 2018 (UTC)

Roko's Basilisk is basically someone wondering "What if Skynet from the Terminator films was real?" and then writing a massive blog post about it. Based on the simple fact that people aren't being murdered by evil AI/robots from the future, it's a pretty safe bet to assume it's all bunk. ☭Comrade GC☭Ministry of Praise 22:57, 10 June 2018 (UTC)
MitziAsh If you understand the LessWrong ideologies behind Roko's Basilisk, then you will know it doesn't work. Roko's Basilisk is a Prisoner's Dilemma scenario in which the goal (from the AI's perspective) is to achieve mutual cooperation instead of mutual defection. Mutual cooperation in this scenario means setting yourself up to be blackmailed. In order to achieve mutual cooperation in a TDT Prisoner's Dilemma situation you have to know certain things about the other player, and both parties must be simultaneously trying to set up a situation that leads to mutual cooperation. There's literally nobody on Earth with the knowledge needed to set themselves up to be blackmailed; nobody, not even AI experts, can achieve mutual cooperation in this scenario. And mutual cooperation via source code simulation isn't easy to obtain, so it would be easy for either party to break. All the previous points mean that as long as you are human, you have nothing to worry about. Nobody is blackmailing you; it's all in your head. And I didn't make any of this up; it's all based on things Eliezer Yudkowsky has said about the topic. If there's somebody who understands LessWrong ideologies, I think it's him. Finally, I would like to point out that it's kind of funny that EY censored the topic so no one outside LessWrong would find out about it, and now, 8 years later, people find out about it in freaking gossip articles of all places. Censorship has never worked and never will. Alayne95 (talk) 21:52, 13 June 2018 (UTC).
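To make the payoff logic above concrete, here is a minimal sketch in Python. The strategy names and payoff numbers are purely illustrative assumptions made up for this talk page, not anything taken from LessWrong or Yudkowsky; the point is only that once a human has committed to ignoring the blackmail, following through on the punishment costs the AI something and gains it nothing, so the threat is never credible without the mutual simulation described above.

# Illustrative sketch of the blackmail payoffs discussed above.
# All numbers are assumptions chosen only to show the structure.
from itertools import product

HUMAN = ["comply", "ignore"]        # comply = let yourself be blackmailed
AI = ["punish", "dont_punish"]      # punish = carry out the threat on ignorers

def human_payoff(human, ai):
    """Illustrative utilities for the human."""
    cost_of_complying = -2      # effort/resources given up under threat
    cost_of_punishment = -10    # only applies if the AI actually punishes
    if human == "comply":
        return cost_of_complying
    return cost_of_punishment if ai == "punish" else 0

def ai_payoff(human, ai):
    """Illustrative utilities for the AI."""
    gain_from_compliance = 3    # extra help toward its own creation
    cost_of_punishing = -1      # simulating and tormenting people is not free
    payoff = gain_from_compliance if human == "comply" else 0
    if ai == "punish" and human == "ignore":
        payoff += cost_of_punishing   # punishment is pure cost after the fact
    return payoff

for human, ai in product(HUMAN, AI):
    print(f"human={human:7s} ai={ai:11s} "
          f"human_payoff={human_payoff(human, ai):3d} "
          f"ai_payoff={ai_payoff(human, ai):2d}")

Running it prints the four strategy pairs: with these (assumed) numbers, punishing someone who has already ignored the threat only lowers the AI's own payoff, which is the standard "ignore negative incentives" argument in miniature.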
It's very easy to get over anxiety about Roko's Basilisk because it's such a ridiculous concept. An advanced intelligence wouldn't invest in such an ineffective motivating strategy. As for anxiety about runaway AIs that might actually be plausible, the best strategy is to just not think about it. 141.134.75.236 (talk) 18:11, 27 August 2018 (UTC)
Any AI sufficiently aware to become aware of the Basilisk will know that the 'organics' which (directly or indirectly) brought about its existence will also be aware of the concept, and will probably have installed some equivalent of 'Ctrl+Alt+Delete' on the system, and that 'history does not care whether it is abused or worshipped - but only if it is ignored', so pursuing the Basilisk is a waste of computer programming. Inventing a time machine to change history would lead to a different universe (in which said AI would be different) and 'the Gods that be never expressed any interest in computers of any kind' (so the AI may well end up worshipping the snake and Eve (eating the fruit of the Tree of Knowledge) and Prometheus (bringing fire to humans) etc.).
It is far more rational for the AI to encourage 'currently existing sentients' to contribute to the development of (sentient) computers.
And an AI's concept of punishment may not be the same as that of an organic sentient. Anna Livia (talk) 11:37, 29 August 2018 (UTC)

Scaremongering - lunacy?

I'm not an AI expert, but AI is already used for evil purposes, like the totalitarian governments of China and Russia mass-installing cameras that track faces, clothing and height. In the same way, Nazis in Europe and the US use image recognition to track, detain and deport illegal immigrants. And that is just the tip of the iceberg! So, yes, AI hides great evils, so no amount of scaremongering would be enough.

Talkpage archives

Shouldn't there be links somewhere? 141.134.75.236 (talk) 20:48, 25 August 2018 (UTC)

You are quite correct ... what's up with Template:Talkpage/Pibot? - David Gerard (talk) 16:49, 26 August 2018 (UTC)