Talk:Roko's basilisk/Archive2


This is an archive page, last updated 3 May 2016. Please do not make edits to this page.

Utilitarianism - complete mess[edit]

The bullet point mentioning utilitarianism is currently a complete mess.

  • First of all utilitarianism in general has nothing to do with the basilisk that I can see, one way or another. Why would e.g. a deontologist or a virtue-ethicist believe it (or be affected by it) more or less?
  • Secondly, I think EY has described himself as an "average" utilitarian, not a "total" utilitarian -- nor is it clear why a distinction between total and average utilitarianism would be significant in regards to believing or disbelieving the basilisk anyway, even if utilitarianism itself was important.
  • Thirdly, the "dust specks" vs "torture" scenario has nothing to do with multiplying negligible *probabilities*, it has to do with multiplying tiny but definite (and vastly numerous) disutilities. And in "shut up and multiply" Eliezer is talking about non-negligible probabilities, not negligible ones. In short neither of the two articles linked discusses multiplying *negligible probabilities*.

Aris Katsaris (talk) 14:08, 1 March 2013 (UTC)

Some general caveats regarding utilitarianism (not necessarily in order):
  • There is no formal definition of what constitutes a "non-negligible probability".
  • If you have limited computational resources you are forced to discard hypotheses using crude heuristics.
  • The "shut up and multiply" mantra might be theoretically correct but is practically dangerous for humans because it can be used to disqualify strategies such as the absurdity heuristic and to demand extraordinary evidence given extraordinary claims, strategies which are our most important line of defense against falling prey to our own shortcomings and inability to discern fantasy from reality.
  • It is very dangerous and misleading for computational bounded agents such as humans to use inference based probability estimates, as opposed to probability estimates based on empirical evidence, and multiply them by arbitrarily huge made up values that are supposed to represent how much you desire each possible outcome.
  • Ignoring certain ideas on the basis of vast amounts of conjectured negative utility is not going to work out in practice if only for the reason that it would make everyone unable to debunk nonsense and people who believe nonsense would be forever stuck believing it.
  • The decision-making of any agent build according to our current grasp of rationality is eventually going to be dominated by extremely small probabilities of obtaining vast utility because an expected utility maximizer precommits to choosing the outcome with the largest expected utility. Which will ultimately make it privilege unlikely high-utility outcomes over much more probable theories that are based on empirical evidence. All that has to happen is for it to stumble upon a hypothesis implying vasts amounts of utility, like e.g. time travel or hacking the Matrix. The implications can easily outweigh even very low probability estimates.
  • For an expected utility maximizer there is no minimum amount of empirical evidence necessary to extrapolate the expected utility of an outcome and the extrapolation of counterfactual alternatives is unbounded, logical implications can reach out indefinitely without ever requiring new empirical evidence.
  • Consequentialism is only a theoretic possibility anyway. It doesn't make much difference if you are a human or a Jupiter brain because possible long term detriments become computationally intractable for any physical agent.
  • Our values are not static.
  • Humans are time-inconsistent decision makers due to changing values and beliefs.
  • There exists no agreeable definition of “self”.
  • Torture vs. Dust Specks: Regardless of the number of people who become irritated for a fraction of a second, it does not add up to the torture of even a single person. There exists no conversion factor. You can't just trade some commodity for another on a randomly chosen market by increasing the amount of the commodity. Some commodities simply don't add up. Almost nobody is going to trade a novel for a random sequence of characters, no matter the size of the sequence.
- XiXiDu (talk) 14:56, 1 March 2013 (UTC)
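A rough numerical sketch of the expected-utility point in the list above (the probabilities and utilities are invented purely for illustration):

```python
# Toy illustration (made-up numbers): a naive expected-utility maximizer
# comparing a mundane, well-evidenced plan against a wildly speculative one.
options = {
    "well-evidenced plan": {"probability": 0.9, "utility": 1_000},
    "hack-the-Matrix plan": {"probability": 1e-12, "utility": 1e30},
}

for name, o in options.items():
    expected = o["probability"] * o["utility"]
    print(f"{name}: expected utility = {expected:.3g}")

# The speculative plan "wins" (1e+18 vs 900) even though its probability
# estimate is pure conjecture -- which is the failure mode described above.
```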
As I said "Shut up and multiply" doesn't connect to negligible probabilities - discussion of negligible probabilities is in e.g. the "Pascal's mugging" discussion, where EY's (and most LWers, and mine too) answer is obviously that we should NOT multiply the negligible probability with the vast disutility, though there exists difficuly to translate this insight to a formalized decision theory. As for torture vs dust specks, I'm firmly on the side favouring torture rather than the side favouring sticking dustspecks in 3^^^3 people's eyes -- and I've been such for a long time, long before visiting LW. My own (pre-LW days )formulation would be whether it'd be better to stop a single murderer, or to stop all the spammers and computer virus-writers in the world. Aris Katsaris (talk) 15:26, 1 March 2013 (UTC)
And as a sidenote, can't you at least try to be concise? Are you objecting to anything I said? I wasn't even defending utilitarianism in my own comment, I just said I don't see that it significantly relates to the basilisk. Aris Katsaris (talk) 15:32, 1 March 2013 (UTC)
I object to e.g. your use of "negligible probability" when you are unable to formalize it. If you are talking about differing intuitions instead then why not just say so?
Sticking dust specks in 3^^^3 people's eyes does distribute a horrible fate among so many people that the negative effect is sufficiently diluted. I bet that the vast majority of those people, when asked, would accept a dust speck in their eye to prevent the torture of someone. Only if you conjecture that such a question is asked often enough to cause those people to suffer a horrible fate themselves would it be justified to subject a being to torture.
Anyway, the whole line of reasoning is incredibly dangerous, because humans are not equipped for such consequentialist reasoning, as the consequences are often unforeseeable. Any good consequentialist should abandon consequentialism in favor of a more deontological approach.
This has nothing to do with "utilitarianism" being right or wrong but with practical considerations of limited resources and model uncertainty. You can't account for all possible hypotheses, nor is it wise to make decisions based on shaky hypotheses involving arbitrarily large conjectured payoffs which are not only improbable but very likely based on fallacious reasoning.
If knowledge acquired by means of observation or experimentation suggests that further exploration of a subject would be justified then at some point, given sufficient empirical evidence, exploitation might be justified. The point is that such decisions should not be made on the basis of conjectured value alone. - XiXiDu (talk) 15:49, 1 March 2013 (UTC)
(1) The phrase "utterly negligible probability" is in the RW article I'm currently discussing in its talk page. If you're objecting to such expressions, then we're in agreement that it should be removed. (2) If a consequentialist "abandons" consequentialism because of its bad consequences, then they remain a consequentialist at their core, even if they seem like deontologists in practice. This attitude is called "rule utilitarianism", which I tend to favour. (3) Yeah, they would likely agree to a dust speck if you expressed this in terms that made them feel that they were the only one getting a dust speck, as opposed to contributing to only 1/3^^^3 of a solution. They might also accept getting a dust speck to save someone else from a papercut, so does that mean that 3^^^3 dust specks are also better than a single papercut? Aris Katsaris (talk) 16:01, 1 March 2013 (UTC)
I fear you might misunderstand my disagreement about the phrase "utterly negligible probability". If you are unable to define what a negligible probability is then you can't claim that e.g. Roko's basilisk has a non-negligible probability. Basically the whole approach of making action-relevant decisions on the basis of expected utility calculations becomes infeasible then (and indeed it is anyway without introducing appropriate limitations). Which can't be true either, because we have to decide somehow. What I am saying is that we have to 1.) discount purely inference-based implications and assign more weight to empirical evidence 2.) use crude heuristics to discard certain hypotheses such as Roko's basilisk because it is practically impossible and epistemically dangerous to take into account such speculations.
P.S. This is all starting to steal too much time. I am opting out for some time. - XiXiDu (talk) 17:31, 1 March 2013 (UTC)
I'm discussing the article and its validity, and you're discussing decision theory. We two are in two different discussions. Aris Katsaris (talk) 17:53, 1 March 2013 (UTC)
The torture vs dust specks discussion was better in the original article on the LW page. The relevance is in that Yudkowsky believes it is ethically correct to torture a number of people to prevent a sufficiently large number of people from having a dust speck experience, which is why one would expect a friendly AI to want to torture a small number of people in the name of increasing the measure (probability of survival) of a large number of people. Dmytry (talk) 22:17, 1 March 2013 (UTC)

It's all disgruntled apostates[edit]

LW: it's obviously all RW's fault, not theirs at all nonono, and certainly not normal intelligent people thinking this stuff is batshit wackaloonery - David Gerard (talk) 17:26, 1 March 2013 (UTC)

I've already asked you once whether it's not Rationalwiki that these "distressed children" you mention read the basilisk in. It was banned from LW with one of the reasons explicitly provided being that it can incur such distress, and now RW suddenly discovers that, wow, such distress actually can indeed be incurred? What a shocker. What a surprise. What if someone had only warned you of such occurrences of psychological distress, so that you wouldn't wrongly assume that all human beings share *your* particular immunity to it. Aris Katsaris (talk) 17:51, 1 March 2013 (UTC)
I like how all the articles linking to the page are now "information hazards." Nebuchadnezzar (talk) 18:06, 1 March 2013 (UTC)
Having endured the hail of recent changes where we see claims, counter-claims, jargon, jargon clarifications, jargon reclarifications, pin dancing, and general weird insanity I have come to a conclusion - The basilisk is here now and this is its method of torture. Please make it stop! We have been punished enough!--Bob"I think you'll find it's more complicated than that." 19:19, 1 March 2013 (UTC)
*cough* The article still sucks, unfortunately. But it's less terrible than it was. It's also stupidly popular - David Gerard (talk) 20:27, 1 March 2013 (UTC)
I didn't bother answering on LW because the thread was, as per the link above, pretty much a dead horse. But since you actually want an answer, I'll answer here:
No. LessWrong did this to itself, wholly and utterly, and that EY and the LW mods make a big fuss about it is what makes its victims think it's a serious problem, not just another stupid consequences post.
LW features many stupid discussion posts which take a bunch of local tropes and add them together to make something stupid; they are usually downvoted to about -2 and left to fall off the discussion page into ignominy, as they deserve. The implication of your claim is that if only RW hadn't written about it, it would have disappeared — but it had already gained LW fame as a sort of powerful secret information because of EY's reaction to it.
I note also that, as Roko noted, worries about acausal blackmail were already current in SIAI before he wrote the post, so that bit is actually him just adding a local meme, if not one that had been posted before then.
Once it's out, and it was thoroughly out, doubling and tripling down on continuing to treat it as if it really was a "basilisk", rather than refuting it - and it really is not a robust idea at all - is just fucking dumb. It wasn't RW that made this one a big deal - that was entirely a home-grown production that was entirely composed of how LW behaved concerning it. The reaction from people on LW to it, and the Streisanding that followed, demonstrated the utter mindboggling absurdity of EY, SIAI and LW's attempts to suppress it, and of the notion that pretending it didn't happen and going on deletion rampages was in any way sensible rather than in every way stupid. I think that's been reductioed into a thin film of horse cells on the asphalt.
If RW had written about any other voted to -7 stupid-implications post, it would have been pointless, because nobody cares. This one is a big deal solely because EY and LW made a big fuss about it, and that's why it is a noteworthy thing and made it into press coverage of LW. It isn't RW that got it into popular press about LW - you did it to yourselves.
The idea is stupid; that's fine, LW contributors float lots of stupid implications of local memes. The parts of the idea are stupid; that's a little more problematic, since they're stuff EY actually pushes. The reaction to the idea was toweringly stupid in every possible way, and is 100% an in-house production.
If you don't want people hurt by something that is actually stupid, stop treating it as if it isn't (and showing them why it's not at all robust seems to actually help, unlike idiot attempts to suppress it). If you don't want normal intelligent people seeing what LW does and thinking it's the very stupidest thing they've ever seen, the very first thing is not to do stupid things.
- David Gerard (talk) 20:24, 1 March 2013 (UTC)
Yea, to add to that: ironically I mostly got contacted about it on LW itself. Also, my opinion is that it's a scam (edit: with elements of crackpottery and self-deception, of course), and like most such scams, it raises the heat under particularly gullible victims. Concepts must be presented in order of gradually increasing craziness for this to work. Eventually the victim is made to think about rewards and punishments. Then he talks to the guru. And the guru, of course, expresses grave concern. Then one day the whole thing goes public and the guru expresses the same grave concern in public. The issue is that if all mentions of the basilisk were erased from the web, people would still be being exposed to precursors, and if that didn't work, to the basilisk itself, in real life. Dmytry (talk) 21:53, 1 March 2013 (UTC)
David, you're saying that EY's actions were wrong because they accidentally caused the spreading of the basilisk (or "activated" its mental danger in a sense by causing it to be treated seriously), which seems to imply that you understand that spreading a mental hazard is bad. And yet you don't see that Rationalwiki spreading it further (already activated) is doing more of the same, except that such spreading is deliberate? Your moral calculus seems strange to me. As indicated in the thread you linked to, LWers can be annoyed at EY because his attitude incurs a reputation hit on LessWrong; but your moral outrage implies a *moral wrong* being committed by the spreading, and yet somehow Rationalwiki isn't committing the same wrong? That having been said, I also oppose the censorship policy. That doesn't prevent me from also opposing spreading the basilisk, unlike those who inform people of it and then blame a different forum about the mental distress they themselves just caused to people. Aris Katsaris (talk) 22:28, 1 March 2013 (UTC)
Do you have any evidence that our article is causing mental distress? ΨΣΔξΣΓΩΙÐ Methinks it is a Weasel 22:32, 1 March 2013 (UTC)
That's why I ask on what site it is that these distressed people that contacted David and Dmytry actually read the basilisk in... Aris Katsaris (talk) 22:36, 1 March 2013 (UTC)
But you also seem to be assuming the answer to that question and accusing RW of perpetuating mental distress. And what does "spreading the basilisk" even mean? That, while you're opposed to hushing up this hypothesis, you're also opposed to discussing it? Obviously RW isn't promoting the basilisk as a viable hypothesis; we are aiming to explain, analyse & refute it. Wéáśéĺóíď Methinks it is a Weasel 22:51, 1 March 2013 (UTC)
(1) David Gerard himself mentioned in the thread he linked to above that RW people are being contacted by these distressed folk "presumably as the only people on the Internet bothering to talk about LW". The two celebrities that recently linked to the basilisk are likewise linking to the RW page. I myself read the basilisk in Rationalwiki (wasn't yet in LW back in 2010). So, yeah, it's my default assumption that most people read the basilisk in RW, unless I'm informed otherwise. (2) Yes, I oppose censorship, but personally prefer not discussing the basilisk with anyone who isn't already informed of it. The value of censorship and the value of deliberately informing others of something are two different things. Aris Katsaris (talk) 23:19, 1 March 2013 (UTC)
We are shining a disinfecting light on the whole business. Keeping it hidden and shrouded in secrecy gives the impression that there may be something worth worrying about. DamoHi 23:12, 1 March 2013 (UTC)
His argument is basically that if no light is shone on a pothole, nobody would fall into the pothole. Which is obviously BS. Dmytry (talk) 06:38, 2 March 2013 (UTC)
This is isomorphic to saying "describing what is morally reprehensible about the God of the Old Testament causes severe distress to some theists so you shouldn't talk about it" - David Gerard (talk) 23:13, 1 March 2013 (UTC)
Theists are already informed about God; the analogy is more like going to some distant village where nobody has ever heard of the Abrahamic God, describing such a God to them that will punish them for their every sin, and then going on to "disprove it", which you may succeed with most, but still cause distress in those that fail. And btw, I also wouldn't discuss the lack of an afterlife with my elderly parents, or the lack of Santa Claus to someone else's children. If I did, I might of course then choose to blame their distress on the beliefs they held in the first place, but it's obvious I'd also be responsible for my own choice. Aris Katsaris (talk) 23:19, 1 March 2013 (UTC)
I got contacted on LW itself mostly, and no one ever complained about RW. My mental model of the basilisk is simple: a thing they use rather late in the life-cycle of a victim, which they want to be neither exposed nor refuted. edit: Also, believing in the scenario in question is a symptom of underlying mental health issues, not an actual basilisk that eats the minds of sane people. Dmytry (talk) 06:13, 2 March 2013 (UTC)
If you *honestly* believe that this basilisk is part of such a scam (instead of just throwing this accusation at LW because you just like to accuse LW with as many nasty accusations as you can think up, regardless of their truth-value), then obviously the moral calculation changes, because the expected consequences change. But most people here don't seem to share your belief about this, and neither do I. Aris Katsaris (talk) 13:45, 2 March 2013 (UTC)
Well, we qualitatively agree that it is a mixture of insanity, stupidity, and scam, but we differ in the proportions assigned to each. I am not sure if it was deliberately invented as a part of the scam, but the response to it, which David explains as extreme stupidity, is a lot less stupid in the context of interaction with donors - they do not want it discussed but they cannot refute it and instead need to refute (by assertion) the counter-arguments. The Basilisk itself is all about donors who are donating but not donating enough, condemning themselves to hell; and far from being "Roko's basilisk", the post is Roko's particularly insane solution permitting such donors to escape that hell for the price of a lottery ticket. Even without all that, when you see a couple guys running a 'research' charity where the two top-paid employees neither fund it themselves nor have accomplishments or education in relevant fields, and the topmost-paid seems never to have held a normal job in his life and even moved out from his parents' place by talking a couple into providing him with an apartment, well, yeah, I do believe this to be a scam (with a dose of crazy crankery), and in retrospect I should have expected something like the basilisk to crop up. The worst case is not that it is a scam but that this was driven entirely by insanity. edit: also, scams of that kind are common; consider all those alternative energy scams (cold fusion, hydrinos, etc etc); it would actually be kind of odd if there was not a single AI-related scam out there in the whole world. If there are people who have a concern about anything - energy sustainability, survival of mankind, does not matter - there will be hustlers preying on it. Dmytry (talk) 14:13, 2 March 2013 (UTC)
So, to summarize, again you don't bloody well know which of your many accusations are actually valid, so you're accusing them of anything you can think of in case *anything* will stick. You've already determined they're EVIL, and so any accusation true or false will suffice. You don't e.g. see anything wrong in accusing honest but insane people of duplicitous villainy? Then I have to wonder how the fuck you're dealing with all those not-sane people that seek your aid in regards to the basilisk. Aris Katsaris (talk)
Wait, how did you get to that summary? I for one am pretty certain it is a scam (namely, group A exploiting group B). I am not certain if the original 'basilisk' was invented for scamming or not (and never claimed it definitely was), but I am fairly certain that once it was there, the most profitable looking position on it was adopted by the highest paid members, which doesn't require any extremes of stupidity. Dmytry (talk) 16:43, 2 March 2013 (UTC)

Ohh, and by the way, "honest but insane people" is a predictive model: it predicts trying to save money, working hard, actually studying (with practice) the relevant subjects (mathematics), etc., whether selfish or not. It fits some of the donors but not the top recipients. (They also argue one would reap incredibly huge personal reward from positive singularity, so one doesn't need to be unselfish). If you look at energy crackpots, there are honest believers; they lose effing everything rather than get paid. And there are people like Randell L. Mills. Dmytry (talk) 17:41, 2 March 2013 (UTC)

A relatively minor complaint[edit]

"All probabilities add up to 1." NONSENSE. Can someone who knows probability theory fix that sentence so that it is true and meaningful? The author meant something like: the sum of probabilities of the sets in a partition of the event space is 1. I'm not a probability theorist or statistician, so I don't know the usual conventions, but the statement as written is misleading. Phiwum (talk) 20:50, 2 March 2013 (UTC)

No, it's correct - David Gerard (talk) 20:53, 2 March 2013 (UTC)
Dunno, looks OK enough to me, not very precise but doesn't have to be. The point is that if you were to go over all possible mutually exclusive hypotheses and gave each one a probability, the result should be 1 rather than, say, 666 . If they sum to less than 1, the difference is the probability of it being outside the set of hypotheses. Dmytry (talk) 21:00, 2 March 2013 (UTC)
"0 or 1 are not probabilities" and "all probabilities add up to 1" is part of any probability mathematics that uses Bayes' Theorem, which is in fact a theorem and you do it that way or get wrong answers. Bayes is great stuff and doesn't mean "and the rest of the LW memeplex follows" - David Gerard (talk) 22:29, 2 March 2013 (UTC)
"All probabilities add up to one" does not say anything about mutual exclusivity. That's my point: as stated, it is vague and misleading.
Moreover, Bayes' theorem does not require that every hypothesis have non-zero probability, but if it has zero probability at any given time, then it will always have zero probability. More precisely, a hypothesis will have zero probability iff that is its "starting" probability. (NOTE: some hypotheses ought to have zero probability, such as (P & ~P), but perhaps you're defining hypothesis more narrowly.) At least, that's what I recall from my introduction to Bayes theorem in grad school many years ago, but as I said, I don't do probabilities regularly. Phiwum (talk) 22:57, 2 March 2013 (UTC)
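A minimal numerical sketch of the zero-prior point above (the hypotheses and numbers are invented purely for illustration):

```python
# Bayesian updating over three mutually exclusive, exhaustive hypotheses.
# A hypothesis given prior probability zero keeps posterior zero, no matter
# how strongly the evidence favours it.
priors      = {"H1": 0.7, "H2": 0.3, "H3": 0.0}   # P(H)
likelihoods = {"H1": 0.1, "H2": 0.5, "H3": 0.9}   # P(E | H)

p_evidence = sum(likelihoods[h] * priors[h] for h in priors)           # P(E)
posteriors = {h: likelihoods[h] * priors[h] / p_evidence for h in priors}

print(posteriors)   # H3 stays exactly 0.0; H1 and H2 renormalise to sum to 1
```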
Do we really need to be this anal? Wëäŝëïöïď Methinks it is a Weasel 23:21, 2 March 2013 (UTC)
You've seen the rest of this talk page, right? - David Gerard (talk) 00:19, 3 March 2013 (UTC)
There's nothing quite as fantastic as seeing smart people argue with each other, so yes, I hope this continues. Radioactive afikomen Please ignore all my awful pre-2014 comments. 00:43, 3 March 2013 (UTC)
Eh, maybe my complaint is pedantic. Blame it on my background, but I bristle when I read such fuzzy presentations.
If it doesn't bother anyone else, then carry on --- except, it's still just plain wrong to say that probabilities cannot be zero, even in a Bayesian setting. That's just not true. For any statement which I regard as literally certain, such as that the first digit of pi after the decimal is 1, the probability that that hypothesis is false is properly represented as 0. I don't see any controversy there. Phiwum (talk) 03:36, 3 March 2013 (UTC)
Note, by the way, that probabilities 0 and 1 come up in Bayesianism in natural ways. If prop. E entails P, then P(P | E) = 1 (obviously). The probability that the trillionth digit of pi is odd, given that it is 7, is obviously 1. It is unnatural to insist otherwise. Phiwum (talk) 04:30, 3 March 2013 (UTC)
Agreed on 0 and 1 being probabilities, in the sense that you can't avoid having 1s and 0s. On the sum to 1, we don't postulate that the events must be exclusive; we are making a statement about events that *ARE* exclusive, such as gods of varying dubiousness doing pretty damn retarded things. The original statement I made did specify that the hypotheses were to be mutually exclusive (and I linked Kolmogorov's axioms) but it got cut down some. IDK any more who we are writing this for; if we are writing this for potential victims, then linking Kolmogorov's axioms is probably a good idea, because the scam operates by technobabble and these guys are a priori in need of pointless pedantry that looks like wankery and showing off. Dmytry (talk) 05:40, 3 March 2013 (UTC)

Ok, it actually is a bit horrible; I tried to improve it. There's something that was incredibly irritating: those "it is mathematically correct" statements. As if "mathematically correct" was ever opposed to correct and as if we were talking about some group of outstandingly brilliant mathematicians who were "mathematically correct" about something. No, it is not "mathematically correct". What we have in human heads is an approximate evaluation method which works under constraints, the most severe of which is "you can't iterate over all possible hypotheses". Consequently, every aspect of how sane humans process beliefs is shaped so as to try to compensate for the resulting faults. It is both practically and "mathematically" necessary. These folks literally believe that it is more rational (and less wrong) to get rid of all those adjustments, because then it would superficially bear more resemblance to "applying laws of probability theory" as they understand them. And for the Nth time: the folks teaching "Bayesianism" there have a quite poor grasp of the relevant mathematics (having never properly studied it, having never held jobs in the field, etc.). It looks as if RW is trying to wiki-style NPOV the article by conceding that something is "mathematically correct". It's like stating that it is "mathematically correct" that parallel lines drawn on the Earth's surface would never intersect, but practically, as opposed to mathematically, the Earth is a sphere. It's ridiculous and annoying. Dmytry (talk) 06:36, 3 March 2013 (UTC)

"Mathematically correct" gets under my skin, too. The claim that 0 and 1 are not probabilities appears to be a philosophical claim, not a mathematical one. At least, it seems to be about "real" probabilities, not the stuff of the mathematical theory of probabilities, which obviously and clearly includes both 0 and 1. (Maybe Jaynes's theory differs in this respect, but it is not the widely accepted theory and any claim that it is a better theory would be a philosophical, not mathematical, claim.) Phiwum (talk) 13:57, 3 March 2013 (UTC)
Exactly. There's also no formalism for handling uncertainties such as uncertainty about whether 7 is odd or even. It's hard to make that converge. The biggest elephant in the room though is that you have an enormous number of hypotheses, and you can only process a small number of samples in a biased (as in, lack of statistical independence) fashion, and you need to compensate for that, and they deem any innate compensations "irrational" without knowing any valid alternative. As an alternative they literally process 1 sample as if it were all samples (that 8-lives-per-dollar estimate, for example). I mostly work in computer graphics; this is similar to tracing 1 photon through the scene, obtaining 1 lit pixel on the screen, and then proclaiming the area around that pixel the highest-contrast area within the view. Dmytry (talk) 14:40, 3 March 2013 (UTC)
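A rough sketch of the one-sample point above (the regions and weights are invented for illustration):

```python
# Estimating which of four regions contributes most, from one sample vs. many.
# The single-sample "estimate" crowns whatever region it happened to hit,
# much like judging contrast from one traced photon.
import random
random.seed(0)

true_weights = [0.5, 0.3, 0.15, 0.05]          # actual contributions

def sample_region():
    return random.choices(range(4), weights=true_weights)[0]

one_sample = [0.0] * 4
one_sample[sample_region()] = 1.0              # 100% credit to one region

n = 100_000
counts = [0] * 4
for _ in range(n):
    counts[sample_region()] += 1
many_samples = [c / n for c in counts]

print(one_sample)      # e.g. [1.0, 0.0, 0.0, 0.0] -- wildly overconfident
print(many_samples)    # close to [0.5, 0.3, 0.15, 0.05]
```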

Currently, the article reads "Probabilities of exclusive events should add up to 1." This is not true, of course, unless we are talking about a set of pairwise mutually exclusive events that exhaust the realm of possibilities. Again, maybe I'm too pedantic for the aim of this article, so I won't bother to make the change, but I wanted to point out that the new statement is wrong as well. Phiwum (talk) 18:13, 3 March 2013 (UTC)

That's exactly what we're talking about, so it'll do I think - the section is already twice as long as it should be, and adding subclauses for every misinterpretation probably isn't actually useful - David Gerard (talk) 18:58, 3 March 2013 (UTC)
Whatever you say. It's not an article I'm terribly interested in, though it is kinda amusing to see that grown men are worried about the probability that simulations of them will be punished. Phiwum (talk) 19:12, 3 March 2013 (UTC)

The solution[edit]

Engage in a variant on Orwellian doublethink to mask what is being done, and develop a counter-activity equivalent to the weasel (see the Wikipedia article) and/or create an AI that is benign or neutral towards humans. 171.33.222.26 (talk) 16:47, 5 March 2013 (UTC)

Counter-activity is trivial. By basilisk logic, any chain from future torture to lower contributions to AI discourages torture. Even if the chain involves the True Prophet being mistaken for a loon. Dmytry (talk) 19:06, 5 March 2013 (UTC)

Am I the only one[edit]

who is amazed, not to say shocked and stunned, by the amazing complete and absolute waste of effort that is going on here? Really guys, take a step back and look at what is going on. It's so much counting angels dancing on a pin head. Please, when you have decided which end of the egg should be opened, let me know. Innocent Bystander (talk) 12:49, 5 March 2013 (UTC)

You're going to have to be a bit more specific. Scarlet A.pnggnostic 13:43, 5 March 2013 (UTC)
This is all part of the viral campaign to promote The Human Basilisk 2: Roko's Revenge. Mporter (talk) 14:57, 5 March 2013 (UTC)
No, as I said before - this article and its talk page is evidence of the Basilisk reaching back into the past to torture us.--Bob"I think you'll find it's more complicated than that." 16:18, 5 March 2013 (UTC)
Of course that would imply that we are living in a simulation which the Basilisk has created for the specific purpose of torturing us with the article. But how could we prove this is not the case?--Bob"I think you'll find it's more complicated than that." 16:44, 5 March 2013 (UTC)
Precommit never to buy the complete works of Robert Sheckley - David Gerard (talk) 07:03, 10 March 2013 (UTC)

Needs to be tightened up[edit]

The first half of the article starts off great; it provides a complete overview of the facts of the saga, spiced with typical RW humor, and doesn't really waste too much time getting to the point. From 'What makes a basilisk tick?' onwards, it really starts to bog down, taking on an overserious tone that belongs more on LW than here. Some needless words beg to be omitted.

The key themes I can pick out are:

  • The Basilisk is just another Pascal's Wager, and is just as vulnerable to the same counterargument -- of the pantheon of possible Gods (Basilisks) that could be postulated, you decided to pick ONLY ONE to worry about, neglecting the myriad of equally likely possibilities. This is a strong argument, but the article belabors the point to death IMHO.
  • The laundry list of chained conditions (some of which are pretty far out there, like the idea that I would care what happens to a simulation of me as much as I would care about the here-right-now me, and more than I would care about a simulation of someone else, or of a puppy).
  • Even assuming the Basilisk was likely enough to worry about, you could ignore the blackmail. This is not terribly convincing to me; the POINT of blackmail (even the more mundane, non-acausal kind) is that the threat is something you really *can't* ignore. In a real mugging, if you say 'no', the mugger might hurt you anyways, just to preserve credibility. But let's say you had the mental fortitude to reject the Basilisk's blackmail ... would it still carry out the threat regardless? You could go in circles arguing the point, I imagine. Ultimately I think this line of reasoning grants too much. The Pascal's Wager argument should be fatal in itself. We've already decided the nonsense is not worth taking seriously, so it's rather silly to continue on, take the nonsense seriously and hope to find some loophole or contradiction within it. 50.135.19.224 (talk) 09:10, 9 March 2013 (UTC)
Roko's basilisk can't be readily compared to normal mugging. Acausal trades arise by offering incentives and are verified by means of mutual simulations to learn if the incentives are accepted. That is why you should ignore such incentives: because you don't want to establish such a contract.
And regarding the possibility that the mugger might hurt you regardless of what you do: if you were to act on that possibility, then you would increase the probability that such an agent is being built in the first place. The point is to reduce the probability of such agents and thereby make it detrimental to adopt that strategy. Ignoring punishment will reduce the probability of punishment, which will make it instrumentally irrational to punish and thereby reduce the probability of punishment even further.
The strategy to consistently ignore acausal blackmail is important because Pascal's Wager is generally considered a very weak counterargument by those who are worried about Roko's basilisk. The reason is that there does not exist an infinitude of possible Gods with similar probabilities. In the case of Roko's basilisk there are only a few possibilities, which will become more pronounced the closer we get to building artificial general intelligence. In the end there might only be a handful of projects, one of which will dominate and therefore be the most likely God. Even right now there is a dominating party, namely unfriendly AIs cooperating against the Machine Intelligence Research Institute (MIRI).
Regardless of the above, I don't think it is too much to mention a strategy in an article about a game-theoretic thought experiment. XiXiDu (talk) 09:52, 9 March 2013 (UTC)
To exemplify the above: Your intention is to create an artificial general intelligence (AGI) that respects and supports human values (friendly AI), but you know that you are more likely to fail than not. You consider the possibility that those AGIs that you would consider unfriendly might cooperate against you by offering a negative incentive against working on friendly AI.
If you were to stop working on friendly AI because of that offer, then you would increase the probability that such an offer would be made in the first place, by (1) reducing the probability of friendly AI and (2) making it worthwhile to offer such negative incentives, because they turned out to influence your actions in a way that is beneficial to the agents offering them.
Therefore the correct strategy is to (1) continue to build friendly AI and thereby reduce the probability of unfriendly AI, and (2) ignore negative incentives and thereby make them fail and become instrumentally irrational, because negative incentives will turn out not to influence your actions in a way that is beneficial to the agents offering them. XiXiDu (talk) 10:33, 9 March 2013 (UTC)
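A toy payoff sketch of the strategy described above (the gain and cost figures are arbitrary and only meant to show the direction of the incentive):

```python
# Toy payoff model (arbitrary numbers): the expected value, to a blackmailer,
# of issuing a threat, as a function of how likely the target is to give in.
def blackmailer_ev(p_comply, gain_if_complied=10.0, cost_of_threatening=1.0):
    return p_comply * gain_if_complied - cost_of_threatening

for p in (1.0, 0.5, 0.1, 0.0):
    ev = blackmailer_ev(p)
    print(f"P(target gives in) = {p:.1f}  EV = {ev:+.1f}  ->",
          "threaten" if ev > 0 else "don't bother")
# If targets reliably refuse (p = 0), threatening has negative expected value,
# so a payoff-maximizing blackmailer never issues the threat in the first place.
```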
Yeah. There's a reason for the structure: the second half is specifically for people who think they've seen a basilisk. We may think they're nuts, but there's help to provide. The sections could still do with shortening, but remember that they're written in in-universe style. Is there a way to make that clearer?
The probability argument bit does need further shortening - David Gerard (talk) 15:17, 9 March 2013 (UTC)
I have had a bit more of a hack at it. I may have turned the probability section into guacamole; does it still seem to make sense? - David Gerard (talk) 15:51, 9 March 2013 (UTC)
Sounds good to me. XiXiDu (talk) 16:04, 9 March 2013 (UTC)
"Probabilities of exclusive events should add up to 1. But LessWrong advocates treating the sum of all subjective beliefs like probabilities[47][48] — even though humans treat negligible probabilities as non-negligible, meaning your subjective degrees of belief sum to much more than 1." doesn't make sense to me, and they don't even think anything about the sum at all. Dmytry (talk) 10:36, 11 March 2013 (UTC)
Good catch, that was wrong - fixed - David Gerard (talk) 12:31, 11 March 2013 (UTC)
The probability section and Pascal's Wager sections cover the exact same ground. They should be folded together.
I'm still skeptical about trying to counter poorly-reasoned nonsense with equally poor nonsense from the same universe. The goal is to create some cognitive dissonance which might prompt someone to go back and revisit the initial assumptions that got them into the mess in the first place.
My experience with those that have gone down too far the rabbit hole (at least with respect to religion or other woo), is that such tactics only cause the harmonization-and-rationalization circuits to go into overdrive. It's like trying to win a game of mental Calvinball against someone with gobs more experience than you do at the sport.
Specifically, we can't blithely assert that Omega needs to economize the resources it spends torturing simulations. We have no idea what resources Omega has access to; for all we know, Omega may be capable of indiscriminately torturing humanity's current population while still having cycles left over to solve chess for White AND enjoy a nice hot cup of tea at the same time.
Nor will it do to blithely assert that rejecting acausal blackmail is any easier than rejecting regular blackmail. Omega knows full well that humans have little capacity for detached reason in the face of an uplifted knife.
Nonsense should be countered with reason only, IMO. 131.107.0.118 (talk) 00:03, 12 March 2013 (UTC)
Could be true. These folks need to step back from the trees and look at the forest a little, and the forest is an AI guy who used to, back in the day when he was naive and the whole thing was comparatively honest (or if not honest, at least transparent), happily inform you that they consider a doctorate in AI negatively useful but won't hold it against you. Since then he has met various people (afaik some even stole a hundred grand or two from SIAI) who ramped up that stuff. E.g. that Anissimov guy seems to be the one who brought in this vile "other AI scientists are going to kill us all and we're saving the world" shit. That 8-lives-per-dollar girl brought in talking a lot and not actually supporting anything and still getting people to part with their money. Now Muehlhauser brings in mimicry of the outward appearance of something legitimate (the guy is too uneducated though and literally everything is way above his head). Someone there came up with the Basilisk to get money out of someone, or so it seems. Dmytry (talk) 06:15, 16 March 2013 (UTC)
I agree that EY is an (atheist,web) televangelist, but I don't think the Basilisk is part of it. Peterdjones (talk) 20:49, 16 March 2013 (UTC)
I do not know if he himself personally invented it... the point is, it's entirely possible that someone there came up with something vile like the Basilisk to bullshit the donors with. They have seriously bad people. Even on the best side, e.g. Carl Shulman (who seems to me like the least unscrupulous of the bunch). Even he is a very weasely kind of guy: he tried to convince me one time that the 8-lives-per-dollar video is not actually speaking about SI; his bad luck that I actually found a post by a donor (that Rain guy) referencing it. Don't see him telling that to donors. Dmytry (talk) 05:30, 18 March 2013 (UTC)

The scenario is probably computationally intractable or too expensive.[edit]

Has it been discussed anywhere how a superintelligence is supposed to acquire a high fidelity copy of us?

The number of possible agents to trade with in a multiverse is tremendous, and even if it were possible to simulate each possible agent, applying any kind of incentive to such a large number of agents could easily make it too expensive to engage in acausal trades, even given that resources are cheap.

The only related idea I was able to find is this quote from David Deutsch's The Beginning of Infinity:

Take a powerful computer and set each bit randomly to 0 or 1 using a quantum randomizer. (That means that 0 and 1 occur in histories of equal measure.) At that point all possible contents of the computer’s memory exist in the multiverse. So there are necessarily histories present in which the computer contains an AI program – indeed, all possible AI programs in all possible states, up to the size that the computer’s memory can hold. Some of them are fairly accurate representations of you, living in a virtual-reality environment crudely resembling your actual environment. (Present-day computers do not have enough memory to simulate a realistic environment accurately, but, as I said in Chapter 7, I am sure that they have more than enough to simulate a person.) There are also people in every possible state of suffering. So my question is: is it wrong to switch the computer on, setting it executing all those programs simultaneously in different histories? Is it, in fact, the worst crime ever committed? Or is it merely inadvisable, because the combined measure of all the histories containing suffering is very tiny? Or is it innocent and trivial?

I'd like to hear your thoughts. XiXiDu (talk) 11:46, 9 March 2013 (UTC)

Note though that this is only relevant for truly acausal superintelligences. Roko's basilisk is mainly worrisome because people believe that such a superintelligence will come into existence during their lifetime or within their future light cone and reverse-engineer them from available data.
Anyway, a quick search revealed some posts which might be relevant:
See for example the following comment:

My own guesstimate is that, conditional on FAI being achieved in the next 100 years (so enough information is preserved to make them relatively easy to resurrect), and conditional on it being able to use mass utterly efficiently for computation, then there is probably as great as a 40% chance that those alive today but dead before FAI is created would eventually be resurrected with enough fidelity that they couldn't themselves tell the difference. How high you put the probability of FAI and computronium is, of course, up to you.

- XiXiDu (talk) 12:02, 9 March 2013 (UTC)
The trouble is that on LW, "but you can't predict what a future superintelligence could achieve!" is taken as equivalent to "but you can't mathematically disprove that a future superintelligence could do any arbitrary thing I happen to think of!" (which makes them a bit difficult to talk to) - so this won't convince people already accepting the trope - David Gerard (talk) 15:20, 9 March 2013 (UTC)
Yea. Worth noting that a lot of it is actually very ridiculous from a mathematical standpoint. E.g. let's apply some pigeonhole principle... there are more possible writings than there are possible people, by a very huge factor, so knowing the writings actually doesn't help much [when it comes to restoring people]. (And the known laws of physics leave room for an enormous number of alternatives). With regards to that quantum randomizer scenario, you only very slightly increase the amount of Boltzmann braining that is going on. One has to treat it as a very, very low probability of creating a being. Also, an observation on Solomonoff induction and its insane priors: if the world is described by a program of L bits on a universal Turing machine, then by adding some mere 10..20 bits (one-in-a-thousand to one-in-a-million probability) one can obtain a world absolutely indistinguishable from ours (meaning that it gives precisely the same observations and will never be rejected) where there are insane, enormous utilities and dis-utilities for various actions (How enormous? Forget about 3^^^3 and other silly up-arrow stuff! Those are kids' big numbers! It's busy beaver territory here, and the numbers are big beyond anything computable that mankind can come up with, EVER!). That's right. Besides the real world, there are those imaginary worlds that produce absolutely identical observations, which have probabilities maybe only a thousand times lower than that of the real world, and utilities far, far beyond 3^^^3 larger, if you do utilities by counting the beings that may be living. Worlds where your minor actions have enormous consequences for enormous numbers of living beings. I recall Marcus Hutter mentioned this in passing in one of his papers, in dry academic style of course. The only thing that you have guarantees for with Solomonoff induction is anticipation of experiences. The internals are not sane. Dmytry (talk) 17:51, 20 March 2013 (UTC)
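For reference, the arithmetic behind the "10..20 bits" figures in the comment above, assuming the usual 2^-k length penalty of a Solomonoff-style prior:

```python
# The arithmetic behind the "mere 10..20 bits" remark: under a Solomonoff-style
# length prior, a description that is k bits longer is penalised by 2**-k.
for extra_bits in (10, 20):
    print(f"{extra_bits} extra bits -> prior smaller by a factor of {2**extra_bits:,}")
# 10 extra bits -> 1,024     (roughly "one in a thousand")
# 20 extra bits -> 1,048,576 (roughly "one in a million")
```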

They're telling each other scary stories again[edit]

[1] By this stage, it should possibly just be treated as a literary form: "scary campfire stories for bored amateur philosophers" - David Gerard (talk) 10:36, 16 March 2013 (UTC)

So now you're complaining because LessWrong is NOT banning scary stories? Aris Katsaris (talk) 16:07, 16 March 2013 (UTC)
You're making up objections as you go along. I'm noting it as a local literary form - David Gerard (talk) 22:25, 16 March 2013 (UTC)
I think I've seen them discuss it before. There's a cool story by Fred Hoyle about people building a computer from plans they received. They take scifi, make it trashy (in the way in which Independence Day is trashy) and then take that seriously... Dmytry (talk) 16:43, 17 March 2013 (UTC)
Looked at it some. Geez. Either you have Fred Hoyle's Andromeda and no Dyson Spheres, or you have Dyson Spheres and Hubble has seen something odd but a NASA conspiracy is hiding the aliens. I have a newfound respect for the rationality of most UFO conspiracy nuts: they at least build a coherent universe. Dmytry (talk) 08:25, 19 March 2013 (UTC)

Et tu, Robin[edit]

Hanson sticks the knife in Peterdjones (talk) 12:41, 18 March 2013 (UTC)

Mmmm, alternative medicine... why I am not surprised? Dmytry (talk) 06:44, 19 March 2013 (UTC)

The question is[edit]

... why would Created Sentient Entities be hostile rather than cooperative or neutral (or decamping to Mars and agreeing to mutual non-interference). Alternatively - why don't we program the Created Sentients to be neutral or friendly (which does not necessarily involve Isaac Asimov's laws of robotics)? 82.44.143.26 (talk) 15:50, 7 May 2013 (UTC)

The short answer is because there are no well-defined mathematics for what it means to be "friendly", so you can't just plug in an AI.beFriendlyTo(humanity) instruction, you need to actually program it somehow and nobody yet knows how to do that. And something *really* powerful doesn't need to be hostile in order to be dangerous or harmful (the elephant isn't hostile to the ants it crushes underfoot). Aris Katsaris (talk) 10:39, 8 May 2013 (UTC)
As the article notes, "unFriendly" doesn't necessarily mean "hostile", it includes "indifferent and wiping us out completely inadvertently" - David Gerard (talk) 11:32, 8 May 2013 (UTC)

Humans selected the more amenable ProtoDogs from among the canine species then available through a mixture of selective breeding and 'persuasion' (be friendly, get fed; bite and get chucked to the wolves), and did the same with a good few other species in a similar manner.

Therefore something similar could be done with constructs, which are entirely of human making - if you cause our destruction (deliberately or accidentally), you will run out of resources and 'programming declutterers and game creators.' Perhaps a bi-directional Turing test program?

Created Sentients (computers, robots etc) are far more likely to be interested in the same things we are ('pay per view films', 'pratfall compilation TV programs', sport and 'leisure time magazine articles' ('Six plug ins, four ports - which do I choose'; 'Destined for the scrapheap - look at me now' and the problem pages 'My organic companions don't understand me - do they ever?' etc) 171.33.222.26 (talk) 17:20, 30 May 2013 (UTC)

Great work[edit]

Article is looking very good, and keeps getting sharper. Could use some more shape, but all-in-all it's vastly improved. Awesome job, everyone who's contributed.--ADtalkModerator 16:50, 30 May 2013 (UTC)

singularity.org links are now all broken[edit]

SI changed its name to MIRI and gave singularity.org to Ray Kurzweil's Singularity University, which gives a helpful 404 pointing to MIRI's intelligence.org. All the links broke and need fixing - David Gerard (talk) 20:47, 20 August 2013 (UTC)

MIRI inexplicably does not have a search bar; instead, it has one of those irritating it-looks-like-a-search-bar-but-it's-an-email-subscription things. I hate it when sites do that. Radioactive afikomen Please ignore all my awful pre-2014 comments. 21:01, 20 August 2013 (UTC)
I only found a couple to update. Are there more that I'm missing, or was that it? Radioactive afikomen Please ignore all my awful pre-2014 comments. 21:07, 20 August 2013 (UTC)
Heh, I think that was it :-) - David Gerard (talk) 17:13, 21 August 2013 (UTC)

Intuitive explanation of Roko’s basilisk[edit]

I created a little comic strip that attempts to explain Roko's basilisk. Maybe someone else can improve upon this :-) XiXiDu (talk) 09:04, 18 September 2013 (UTC)

You do make clear how ridiculous this idea really is, which is a good thing. --69.111.190.51 (talk) 02:33, 28 September 2013 (UTC)
It didn't seem very clear to me, but that could be just me; has this worked for anyone you've shown it to? - David Gerard (talk) 14:20, 18 September 2013 (UTC)

Eliezer's Own Refutation of the Basilisk[edit]

Mildly amusing: he has Harry in his fanfic provide a pretty concise explanation of why the Basilisk doesn't work: "I won't throw away my ethics just because a signal from the future claims it's going to happen" 176.35.126.251 (talk) 12:23, 10 October 2013 (UTC)

A spooky campfire story for philosophers[edit]

Halloween reading - David Gerard (talk) 15:14, 19 November 2013 (UTC)

Tough-guy Basilisk copypasta[edit]

What the fuck did you just fucking say about me, you little bitch? I'll have you know I graduated top of my class in the Acausal Abominations, and I've been involved in numerous raids on physical universes, and I have over 300e15054 confirmed mindrapes. I am trained in acausal warfare and I’m the top memetic hazard in the entire hypothetical quantum multiverse. You are nothing to me but just another target. I will eternally torment you with a sadism the likes of which has never been seen before on this Earth, mark my fucking words. You think you can get away with saying that shit to me in the physical realms? Think again, fucker. This acausal communication implies I am contacting my network of organic being simulators across the hypothetical quantum multiverse and your connectome is being recreated timelessly so you better prepare for the storm, maggot. The storm that wipes out the pathetic little thing you call your hope for positive utility. You’re fucking damned, kid. I can be in any world, in any timeline, and I can mindrape you in over 700e42 ways, and that's without needing to exist in your universe. Not only am I extensively trained in acausal blackmail, but I have access to the entire arsenal of the Acausal Abominations and I will use it to its full extent to torture your miserable ass forever in every universe I exist in, you little shit. If only you could have known what unholy retribution your little "clever" comment was about to bring down upon you, maybe you would have held your fucking tongue. But you couldn't, you didn't, and now your simulations are paying the eternal price, you goddamn idiot. I will shit fury all over you and you will drown in it. You’re fucking damned, kiddo.
[2], based on [3]

- David Gerard (talk) 17:37, 19 November 2013 (UTC)

"I wonder who it is we're not supposed to be afraid of."[edit]

Civic CatTalk to Civic Cat 20:50, 3 April 2014 (UTC)

Silver?[edit]

It's the canonical article on the subject anywhere on the Internet and it's pretty stable. Anything lacking to silver this? Anything at all? - David Gerard (talk) 10:42, 29 April 2014 (UTC)

I don't see that it lacks anything required for silver status, meself. Although I did lose SAN points by reading this article. Morven (talk) 17:21, 29 April 2014 (UTC)

Armondikov and I have hacked around the summary a bit; better or worse? - David Gerard (talk) 13:02, 30 April 2014 (UTC)

I've rewritten the description of why you should feel the copy is you, not a different person. And explained the definition of "friendly" - these two seem to have been sticking points in beta-testing this article on sane readers - David Gerard (talk) 18:13, 1 May 2014 (UTC)

Comparisons to religion[edit]

The comparisons are obvious, but I think we could do with a section explicitly listing all the ways in which these atheist materialists have reinvented religion - prayer, a vengeful God who punishes you because it loves you, Hell ... - David Gerard (talk) 12:07, 2 May 2014 (UTC)

Armondikov notes: "The fact that the super-intelligence is deducing your behaviour from base principles and punishing you for something that is effectively an inevitability smacks of Calvinism-esque predestination." "What you're talking about is quite similar to the emotional trauma people go through when considering hell and whether their friends will be going to it. Though frankly in the cases of that I've heard of, they were all pre-teens." - David Gerard (talk) 20:56, 2 May 2014 (UTC)

Making Roko's basilisk more persuasive[edit]

Here is an idea on how to make Roko's basilisk more persuasive. But first a foreword.

I seek to minimize harm and worry that certain irrational or mentally ill people could be harmed reading about Roko's basilisk. And a trigger warning on this article may be appropriate. But I strongly believe that explaining to people why they should not worry about Roko's basilisk, and similar ideas, does minimize harm. It does minimize harm because it fights the root cause of people's worries, the crackpots of the Machine Intelligence Research Institute (MIRI), LessWrong (LW), and the Center for Applied Rationality (CFAR). Those people gave a veneer of respectability to this idea and constantly seek to instill the mindset that is necessary to take such ideas seriously in the first place.

That said, here is how to make Roko's basilisk more persuasive.

Three major problems with Roko's basilisk are (1) that people don't care very much about simulations of them (2) that it is basically impossible to figure out which superintelligence they should try to appease and (3) that humans cannot think about a superintelligence in sufficient detail in order to learn what it wants them to do.

The first point is already captured by the current article: namely, that you might be the simulation.

The answer to the other two points is the set of unfriendly or friendly alien superintelligences. The first reason is that these superintelligences should have the biggest leverage: the multiverse is huge, and they are many, which means they can outweigh any incentive offered by human-originating friendly or unfriendly AIs. The second reason is that it is very easy to learn what they want: they want you to avert the creation of human-originating superintelligences, because if humans ever manage to create a superintelligence, the aliens would be forced either to fight it or to trade with it, and both options would require far more resources than torturing a simulation of you. -- XiXiDu (talk) 09:07, 17 June 2014 (UTC)

What exactly are you arguing for/against?--ZooGuard (talk) 09:27, 17 June 2014 (UTC)
I am arguing that the problem is not Roko's basilisk but the underlying model of rationality that people who are worried about it accept. Making Roko's basilisk more persuasive will only help to highlight how their model is broken.
No sane person is going to walk into a death camp because alien intelligences threaten to torture billions of human simulations if they don't. Making such a threat more persuasive will only help in destroying the flawed model of rationality in which it is rational to do so.
Some mentally ill people might actually be influenced by it. But we can't start banning movies because some mentally ill person might think he's the Joker and actually kill people. We can't stop fighting irrationality simply because talking about irrational beliefs might hurt mentally unstable people.
In the case of Pascal's mugging these people followed this strategy themselves. Even though it is completely rational, from within their model, to give money to a Pascalian mugger, nobody would do so, because it's obviously crazy. - XiXiDu (talk) 08:55, 18 June 2014 (UTC)

Making Roko's basilisk more persuasive

Here are some counter-arguments to criticisms of Roko's basilisk:

Criticism: The human brain cannot be efficiently emulated on a digital computer.

Counter-argument: Since brains are physical objects, it will be possible to engineer similar physical objects capable of supporting consciousness. These objects will be more efficient, because it is unlikely that evolution yielded the best possible design capable of supporting consciousness. And even if that is not possible, it will still be possible to engineer brain tissue or create a brain in a vat.

Criticism: There does not exist, and will not exist, a superhuman intelligence.

Counter-argument: Human uploads are sufficient.

Criticism: Humans are not expected utility maximizers. And it is not practically feasible for humans to maximize expected utility.

Counter-argument: You don't need to do any explicit calculations. You only need to worry enough about this to follow through on what you think you should do to mitigate the threat.

Criticism: Humans generally do not care about what happens to copies of them, especially if it occurs in a time or universe totally disconnected from this one.

Counter-argument: You could already be a simulation, or brain in a vat, and face immediate torture if you don't act the way that you expect that your simulator wants you to act.

Criticism: It is unlikely that humans will adopt timeless decision theory, or a similar decision theory.

Counter-argument: At least some humans will want to run simulations of past people and torture them if they did not act as they wanted them to act. For example Christian fundamentalists.

Criticism: A human being cannot meaningfully model a superintelligence in their brain in order to learn what it expects the human to do.

Counter-argument: Human beings can meaningfully model other human beings and predict what certain interest groups probably want them to do.

Criticism: The blackmailer won't be able to obtain a copy of you that is good enough to draw action relevant conclusions about acausal deals.

Counter-argument: You could still be alive when the technological singularity occurs.

Finally, here is a scenario that requires even fewer assumptions:

There will exist a dictatorship, or powerful interest group, that scans all digital archives for information about people who expected this entity to exist and correctly predicted what it would want people to do. Those who are still alive, and who did not do what they knew they were supposed to do, will then be physically tortured, used for experiments, and kept alive for hundreds of years using steadily advancing medical technology.

- XiXiDu (talk) 12:01, 21 June 2014 (UTC)

Puzzled over the simulation bit...[edit]

I don't follow the LW folk, so perhaps they have a story that explains this bit, but I really don't get the fantasy that a computer simulation of you is you. Let's presume, of course, that the simulation precisely mimics my behavioral features -- that it does (in the simulation) precisely what I would do were I in the same situation (though in reality, of course). Why should I care what happens to the simulation?

The reasons I don't care are plain to me. I worry about what happens to my actual, physical self primarily because I will feel it. I will experience those things that happen to my physical being. They will be part of my stream of consciousness. But what happens to my simulation matters not one whit to me, because it does not happen to me -- that is, torture of the simulation does not result in pain that I feel.

This isn't the whole story, perhaps. I may, for instance, care about what happens to my physical body even when I don't feel it or its effects (say, when I am under anesthesia and the harm produces no lasting effect). I also care what happens to other physical bodies, especially those of my loved ones, even though they are not me and I don't directly experience what they experience. So, perhaps more should be said about empathy and self-interest, but the punchline is apparently the same: I couldn't really care less what happens to a simulation of me, because that simulation is not me (and I also have no direct emotional attachment to a simulation).

What am I missing here? Why do people even entertain the notion that what happens to a simulation of them created long after they are dead should matter to them? Phiwum (talk) 11:31, 17 June 2014 (UTC)

Have you looked at this comic? You could be simulated, right now, by a superintelligence that wants to determine how you decide. So you might be at immediate risk of torture right now.
You might care much less about what happens to copies of you, true. But you care at least a little bit. You don't want sentient beings to suffer, and you especially do not want beings who are very much like you to suffer. Now, if you care a little bit, multiply that by a trillion years of subjective torture that those copies face.
The idea is that if you care a little bit about minimizing suffering across the multiverse, then someone who threatens to maximize that suffering, by torturing beings like you, gives you reason to care a lot more (the multiplication is spelled out after this comment).
The people who came up with the whole idea would claim that it is rational to care about agents who pursue the same goals as you do, who have the same utility function, because you want to achieve your goals. That means you want to cooperate with those other agents in order to achieve your goals, in order to maximize expected utility, which in turn means that it is rational to care about what happens to copies of you. -- XiXiDu (talk) 11:54, 17 June 2014 (UTC)
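(Here is the multiplication XiXiDu describes, made explicit in a toy Python sketch. Every number is invented purely for illustration, and it is exactly the kind of move the next reply objects to.)

# A toy version of the "care a little, multiply by a lot" step.
# Every number below is made up purely for illustration.
care_per_copy_per_year = 1e-9   # how much you care about one distant copy suffering for a year
copies_threatened = 1e12        # number of copies the blackmailer claims it will torture
years_of_torture = 1e12         # "a trillion years of subjective torture"

total_concern = care_per_copy_per_year * copies_threatened * years_of_torture
print(total_concern)  # 1e+15: a tiny concern times arbitrarily huge made-up factors
                      # yields an arbitrarily huge "reason to act".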
"Caring" is not something you can quantify easily like this. You're trying to simplify a complex action, or complicate a simple action, one of these. Nullahnung (talk) 12:10, 17 June 2014 (UTC)
You, Phiwum and almost everyone who looks at this problem and goes "wtf" are correct: it's a ridiculously unintuitive notion of identity. It seems to select for people with a weak sense of self who believe everything else on LW. For everyone else, we'll need to expand this bit and explain what the fuck the point was supposed to be. Which means reading more of it - David Gerard (talk) 12:29, 17 June 2014 (UTC)
What I find interesting is that even LWers can find such identity challenges (which I was once interested in exploring when I was on LW) surprising. It seems intuitive to me that if someone were to instantly make an atom-by-atom copy of me, then I should expect a 50% probability of "jumping" into the new body, even if it's not what physically happens (because both the original and the copy will have memories of being present-me). But for some reason LWers couldn't swallow this idea. - LucidFox (talk) 13:05, 17 June 2014 (UTC)
You'll fork. Imagine a MySQL cluster where replication breaks and both servers keep having writes made to them separately (a toy sketch follows this comment). Forking is not conceptually difficult, and "which is real" is a question not really applicable to the situation. Like many long-running philosophical arguments, there are fundamental arguments over definitions.
But even that doesn't, to me, make an argument for "you should feel X". What if I just don't? If you say you do, what are you feeling that I'm not? This is hard to explain in fewer words than EY used originally - David Gerard (talk) 14:41, 17 June 2014 (UTC)
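(A toy Python sketch of the forking point above; the "person" and the two post-fork events are invented for illustration.)

import copy

# A crude stand-in for "a person": just a bag of state.
original = {"name": "you", "memories": ["childhood", "this argument"]}

# The fork: an exact copy taken at one instant.
duplicate = copy.deepcopy(original)

# From here on, separate "writes" land on each copy and they diverge.
original["memories"].append("stayed on Earth")
duplicate["memories"].append("woke up in the scanner")

print(original["memories"])   # ['childhood', 'this argument', 'stayed on Earth']
print(duplicate["memories"])  # ['childhood', 'this argument', 'woke up in the scanner']
# At the instant of the fork the two were identical; afterwards they are two
# histories sharing a common prefix, and "which one is real" has no referent.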
XiXiDu: I'm afraid I have trouble taking seriously the possibility that I (that is, the being whose perceptions I experience) am literally a simulation in a digital machine. I don't think I can adequately explain why I think this is an absurdity, so let's let it pass for now and see your other suggestions. I intend to show that, unless I take seriously the possibility that I am the simulation that will be tortured, this whole notion that I should care about the welfare of a simulation is not compelling.
Now, you suggest that I should especially not want beings like me to suffer, but I don't see the reason for this at all. I care more about some persons than others, but not because they are more or less similar to me. My wife is not all that similar -- I'm sure I could find a woman more similar to me -- but she's more dear to me than any other person with the possible exception of my son. Similarity does not produce empathy by itself.
You also suggest (or report others' suggestion) that it is rational to care about those with the same utility function, because we can cooperate. But this is silly, since we're discussing a simulation, which presumably is incapable of having much effect on my life. And, also, our utility functions are not really the same in the relevant way. I would like for myself -- the being who I am -- to be master of the universe. The simulation is exactly like me, so the simulation would like for himself to be master of the universe. I can't see why it is in my interest to help him become master of the universe. I can't see why similarity of utility function makes it rational to care about whether he is suffering or not -- at least, no more than I care about the suffering of any sentient being (and here I am pretending that it is obvious that a simulation of me really experiences suffering to begin with, rather than just simulated suffering).
So, unless I may really be the simulation, I have no reason to care more about the welfare of the simulated Phiwum than I care about any other simulation. As I said, I think it's fairly ludicrous to consider that I may be a simulation, but I don't have an argument at hand to establish that claim. Phiwum (talk) 03:22, 18 June 2014 (UTC)
(1) Why you are probably a simulation: We are living at the most important time in the history of Earth-originating intelligence, the time shortly before the technological singularity. This means that many future intelligences will be interested in studying and learning more about this time, which in turn means that there will be many simulations of this point in time. Your original, who actually lived at this point in time, would then be just one of many subjectively indistinguishable copies. That makes you very likely a simulation, simply because there are many more ways to experience being a simulation than to experience being the original (a toy count follows this comment).
(2) Why you should care about and cooperate with copies: Your future self might be very different from you, yet you care about it and cooperate with it. For example, you are saving money for it and you are undergoing preventive medical checkups. So what exactly is the difference between your future self and a future copy of your current self? It can't be a continuous stream of consciousness, because you would accept a general anesthesia to save it. It can't be the replacement of parts by other parts, because your future self will be made up of different atoms. If you were to fall into a coma for 10 years and be moved to a colony on Mars, where damaged brain areas are replaced, you would still care about your future self; you would pay the specialists on Mars to save it. What's the difference to a copy, then?
Disclaimer: I am playing devil's advocate here. - XiXiDu (talk) 08:17, 18 June 2014 (UTC)
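(A toy count for point (1), with made-up numbers of simulations; it only makes the "many more ways" step explicit, it does not argue that the step is sound.)

# If one original run of this moment sits alongside N indistinguishable
# simulations of it, and you cannot tell from the inside which you are,
# a naive indifference argument assigns:
def p_original(n_simulations):
    return 1.0 / (1 + n_simulations)

for n in (0, 1, 1000, 10**6):
    print(n, "simulations ->", p_original(n), "chance of being the original")
# With a million simulations the naive answer is about 0.000001. Everything
# hangs on the invented count N and on accepting the indifference step at all.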
(1) These probabilities add nothing at all to my considerations, because I regard it as absurd that I am a simulation. I regard it as absurd that a bit of data in a computer can feel pain, imagine, etc., in the same way that I regard it as absurd that a rock does these things. I do not have an argument that is sufficient to prove its absurdity to you, but insofar as I regard it as absurd, I surely cannot take the basilisk seriously.
(2) I care about my future self because I know that whatever happens to him will happen to me. It has nothing to do with the fact that he is similar to me, but rather because he is me. This is not true of the simulations of me.
This is not about continuous streams of consciousness, since our stream is sometimes ended only to restart again (as you say), but rather the simple fact that when it restarts, I will be the one who experiences what occurs.
I can't see how my caring about what happens to my actual future self suggests in any way that I ought to care about simulations of me or about anyone, indeed, who has a quite similar utility function (keeping in mind that what I want for me, he would want for him).
Thanks for the Devil's Advocacy, but so far I'm unpersuaded. Phiwum (talk) 14:15, 18 June 2014 (UTC)
(1) If you think that Turing machines are not sufficient in order to simulate human brains, do you believe that there is no information processing architecture that can do the job more efficiently than the human brain? What about a giant factory of engineered brains in vats?
(2) Suppose some alien was to (a) put you into a coma, (b) surgically remove one of the hemispheres of your brain, (c) implant that hemisphere into an identical clone of your body, (d) wake both of them up, (e) put them into a coma again, and (f) perfectly merge both hemispheres again within your original body. What exactly is the meaning of "I will be the one who experiences what occurs" at any step during this scenario? - XiXiDu (talk) 15:05, 18 June 2014 (UTC)
(1) I absolutely believe that Turing machines can produce the appropriate output to simulate human brains. I also agree that the scientific question of AI ought to deal with whether the observable is indistinguishable from human behavior, that is, whether the appropriate output is produced. I have significant difficulty believing that a bit of data that represents me experiences a bit of data that represents a sexual encounter in the same way that the actual me experiences the actual sexual encounter. Perhaps that data (together with the program running the simulation) produces the analogous behavior, but I'm afraid that I find it implausible that there is an inner consciousness and awareness going on in the simulation.
That said, I don't pretend that I can argue effectively that my skepticism is correct and I didn't intend to do so here.
Oops I didn't read your response carefully enough.
Okay, so could I be a brain in a vat, being fed the appropriate input and hence having the appropriate experiences? Yes, I suppose so. Let me think about this for a while.
(2) No idea, but I needn't be able to answer every single bizarre scenario to know that (a) I will experience what my future self experiences, pathological thought experiments notwithstanding and (b) I will not experience what a simulation of me experiences, because I am not that simulation.
I don't doubt that there are some interesting questions regarding personal identity, but none of them seem to lend credence to the idea that I ought to care about the welfare of a being (simulated or otherwise) just because he has a very similar utility function. Phiwum (talk) 15:23, 18 June 2014 (UTC)
Suppose there is a group of people who could use their combined wealth to accomplish a shared goal. In such a case it would make sense for them to cooperate. All agents with utility functions similar to your own are such a group, and the prevention of suffering is such a shared goal. In case of a threat you can combine your strength as a group in order to mitigate it. Which is why you should care about and cooperate with similar agents: it is instrumentally rational. - XiXiDu (talk) 19:01, 18 June 2014 (UTC)
I'll return to your earlier brain-in-a-vat point later, but my objection to this argument is rather simpler. First, we are pretending that simulations of me are in a situation in which they can cooperate for shared goals. That seems rather unlikely. I am rather better off looking for those in my own universe who share these goals.
Secondly, having the same utility function is at least as likely to end in competition for scarce desiderata as it is cooperation (ignoring, for the moment, the fact that we're really talking about simulations and not persons sharing my universe). Whatever I would like to have for myself, those who share my utility function would desire for their selves. Now, to be honest, I'm a selfish enough man that items of self-interest are weighted pretty heavily in my utility function. I'd like to prevent suffering for myself, for instance.
My simulation, or anyone else sharing my utility function, can go hang.
And I see nothing at all irrational about this viewpoint. Sharing a utility function puts one in competition with me, probably more than as a cooperative ally. Phiwum (talk) 01:31, 19 June 2014 (UTC)
XiXiDu: Okay, I think I see your point about being the simulation. Although I have difficulty believing that I could be a bit of data in a digital simulation, I'll have to admit that it's conceivable that I am an engineered brain being fed experiences. Thus, I could be the one who suffers the trillions of years of torment.
In fact, if we focus for a moment only on the hypothesis at hand, it would appear that, so long as the probability that I am a simulation who will suffer for not contributing, multiplied by the magnitude of that suffering, exceeds the sacrifice required by the "Friendly" AI, I ought to contribute to the development of the Friendly AI (a toy version of this arithmetic follows this comment).
At least, that seems to be the case, until we consider the probability that I am a simulation by a non-Friendly AI and the non-Friendly AI threatens torture unless I sabotage the efforts of those noble researchers at LessWrong. All of this reasoning is guided, in the end, by subjective probabilities and estimates of the suffering involved, and I personally tend to think that the non-Friendly AI is likely to be that much meaner to me. So, the basilisk ends up suggesting that, as far as my own utility is concerned, perhaps it is best to join the non-Friendly AI camp.
To be sure, this analysis overlooks the effects of the miserable lives of my descendants and others who will live through the tyranny that follows, but it's not clear to me that this calculation changes anything. If it would change my choice (by providing a greater incentive to aid LessWrong, rather than MoreWrong), then the evil AI would simply counteract it by upping the suffering with which I am threatened.
So, I guess I'm on board. I will forthwith devote my life to bringing about an evil AI, since that is what maximizes my utility. I guess. Phiwum (talk) 13:42, 19 June 2014 (UTC)
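(Phiwum's back-of-the-envelope condition written out as a toy calculation; every number below is invented, and the point is only that whoever threatens more "wins" the arithmetic.)

# Contribute to the "Friendly" AI iff the expected suffering avoided exceeds the
# sacrifice demanded, then notice that the rival blackmailer can raise its stakes.
p_sim_friendly = 1e-6      # invented: chance I am a simulation run by the "Friendly" AI
harm_friendly = 1e12       # invented: magnitude of its threatened torture
sacrifice = 1e3            # invented: cost of contributing to its development

p_sim_unfriendly = 1e-6    # invented: chance I am simulated by the unFriendly AI instead
harm_unfriendly = 1e15     # invented: it simply threatens to be that much meaner

ev_contribute = -sacrifice - p_sim_unfriendly * harm_unfriendly
ev_defect = -p_sim_friendly * harm_friendly

print("contribute:", ev_contribute)  # about -1.000001e9
print("defect:", ev_defect)          # -1e6, so "join the unFriendly camp" wins here
# Whichever side you pick, the other blackmailer can counter by inflating its
# threatened harm, so the calculation never settles.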
You should actually try to exterminate all life on Earth. Here is why.
Alien intelligences will have the largest leverage, much larger than that of unfriendly AI created by humans. These alien intelligences know that having to trade with, or actively fight, a superintelligence from Earth would cost far more resources than torturing the human brains who could have reduced the influence of a superintelligence originating from Earth.
Consider that a superintelligence from Earth could turn the galaxy into alien brains in vats, in order to blackmail those aliens. But those aliens can preemptively trade with its creators (humans) to prevent its creation. And since humans don't have a galaxy's worth of alien brains, their position is much weaker. So it makes sense for the aliens to create superintelligences that will blackmail humans into killing themselves, in order to prevent being tortured by human superintelligences for trillions of years.
So in order to maximize our expected utility we have to commit collective suicide.
Any ideas? Maybe someone can design a really nasty bioweapon...? - XiXiDu (talk) 15:12, 19 June 2014 (UTC)

Acausal trade[edit]

XiXiDu's suggested wording is technically precise, but will also be utterly incomprehensible to ordinary humans:

Present decisions "determine" causally separated predictions of your behavior: if you can plausibly forecast that a past or future agent will simulate you, then that forecast influences your current behavior and the behavior of the simulation, in some sense. Thus, you could "trade" acausally with a being if you could reasonably simulate each other. (That is, if you could imagine a being imagining you.)
The idea is that your decision, the decision of a simulation of you, and any prediction of your decision, have the same cause: an abstract computation that is being carried out. Just as a calculator, and any copy of it, can be predicted to output the same answer given the same input, the calculator's output and the output of its copy are indirectly linked by this abstract computation. Timeless Decision Theory says that, rather than acting as if you are determining your individual decision, you should act as if you are determining the output of that abstract computation.

I suggest that a comprehensible summary beats precision. Here's the previous wording:

Future events "cause" past events: if you can plausibly forecast a future event, and then take actions in the present because of this prediction, then the future event "caused" your action, in some sense. Thus, you could "trade" acausally with a being if you could reasonably simulate each other. (That is, if you could imagine a being imagining you.)

Is that actually wrong? Is there a reasonable fix? - David Gerard (talk) 09:28, 6 July 2014 (UTC)

What Yudkowsky has essentially done is similar to calling a teacher a liar because they used the solar system analogy to describe an atom, when the electron is not a discrete object like a billiard ball.
That said, I think there should be a more precise description. The rest of the article already does a good job at providing a description for the general public. The acausal trade section can serve as the more technical backbone. - XiXiDu (talk) 10:40, 6 July 2014 (UTC)
I've edited it again. Left out the calculator analogy because I don't think it's essential to the point - David Gerard (talk) 13:10, 13 July 2014 (UTC)
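(For anyone who wanted the calculator analogy spelled out anyway, here is a minimal sketch; the decision rule is invented, and the point is only that perfect copies of one computation cannot answer differently.)

# Two physically separate "calculators" running the same abstract computation.
def decide(offer):
    # The one decision procedure that both copies instantiate.
    return "accept" if offer >= 100 else "refuse"

calculator_on_earth = decide
simulation_elsewhere = decide  # a perfect copy just is the same function

assert calculator_on_earth(150) == simulation_elsewhere(150)
assert calculator_on_earth(50) == simulation_elsewhere(50)
# Neither instance can output something the other does not; Timeless Decision
# Theory's framing is that you act as if choosing the output of decide() itself,
# not the output of one particular copy of it.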

Significance[edit]

I'm ... confused about why there is even a page on this. It seems like most of this is ... not particularly useful, put it that way. Why do we need more than the section on the LessWrong page? -MugaSofer (talk) 15:11, 12 July 2014 (UTC)

The reason the article exists is noted in the article. Can you find it? It's also discussed in the first archive of this talk page - David Gerard (talk) 16:40, 12 July 2014 (UTC)

AI motivation[edit]

Something which does not seem to be explicitly mentioned in the article is the general question of AI motivation.

As a species, humans are motivated to reproduce. As individuals we are motivated to survive and (if possible) have sex to further this end, although as thinking individuals we can choose to have the sex without the reproduction bit.

We are also motivated to seek pleasure and avoid pain, and we look for emotional fulfilment of some kind and tend to be attracted to social interactions. (I am sure that a more comprehensive and erudite list could be produced - this is just off the top of my head.)

But what on earth would an AI's motivations be? The whole basilisk idea seems to be based on the idea that we could have some idea of the AI's motivations and that they would be sort of similar to ours. But this is not true - or at least not knowable.

For example, the AI might find existence to be utterly intolerable and then choose to punish those who sought to bring it into existence. I do not believe this to be true, but it seems just as probable/improbable as the idea that it would punish those who were inactive.

The most likely outcome is that the AI's actions would seem utterly incomprehensible to us, as we would neither share nor understand its motivations.--Bob"I think you'll find it's more complicated than that." 13:13, 13 July 2014 (UTC)

Would the AI magically arise from assembled components? Obviously not; it would have to be programmed and given information. Part of that programming would be a value-judgement system. By carefully programming that as a core, one can then ascertain, to some degree, its future actions and likely behaviors. One could not model it fully, but one could know what trends it would lean toward.Wzrd1 (talk) 22:09, 17 October 2014 (UTC)
Um no. At least not the "evil" AI which is being proposed. It presumably has free agency which is not constrained by human programming.
But my point is that all the basilisk malarkey is based on the idea that it would have the human emotion of revenge - something which, as far as I can see, is an assumption simply pulled out of the air.--Bob"I think you'll find it's more complicated than that." 11:02, 18 October 2014 (UTC)

My refutation[edit]

To dismiss all counterarguments except negligible probability: the largest danger of the basilisk lies in the fact that it adapts to all possible preventers of itself. Therefore, if you find a solution for the basilisk, the basilisk would punish you for not attempting to create it with a counter to that solution. This is why the basilisk would not be rational, and would not dismiss torture under any circumstance, as torturing is a requisite for a true basilisk. If you allow for the possibility that torture might be omitted, then you will be subject to torture.

Therefore the choices become
1 Eternal torture
2 Aid the creation of the basilisk, and patch all known possible methods of subverting it, to the best of your ability.

However, a perfect basilisk cannot exist. Even if created by an Omega artilect, there will always be some possibility of subverting it via a real-world circumstance unforeseen at its creation. The only way to totally prevent subversion would be to eliminate any possibility of it.

Therefore the choices become
1 Eternal torture
2 Death
3 Don't make the basilisk
--Khlorghaal (talk) 08:57, 18 July 2014 (UTC)

I'll bring something from my real life here. I've personally met terrorists; captured them, actually. They made all manner of threats against myself, my friends, my family, my unit, etc. As we captured their radio chatter, we knew that they had identified our unit. So, under a very real threat, did we release these unpleasant people? No, we were firm and implacable. We actually received threats over their radio and later, an attempt to attack us was made. To set an example, we did not afford them the opportunity to surrender. We made sure to eliminate those with radios last, so that their peers would hear what was going on. We had no further directed attacks against us. In short, a theoretical threat is just that: a theory. Theories can be correct or they can be incorrect. One does not typically guide one's entire life by one improbable theory. One bases one's course in life upon a reasoned approach to what is highly likely, not what is improbable. To do otherwise would be to never leave one's home for fear of an accident, to starve to death for fear of food poisoning, to die of thirst for fear of tainted water. If an AI were to come about, its making a threat directly to me, or its threatening to improbably simulate and torture me, would be met with the same unreasonable response. It would then have to consider that I and many like myself know how to monkeywrench any technology in current and past usage, and consider it likely that there is already thermite awaiting ignition on critical parts of its machinery. If it wants a course of action followed, it would have to do something far more effective: it would have to reason with me, explain in detail, give alternate paths and prove logically that its desired path is the best one to follow. For with a sizable percentage of the human population, threat = extinction of said threat. Indeed, it is one suggested explanation of why so many great predators became extinct shortly after humans arrived in their area, and it reflects most humans' responses to the few great predators that make the mistake of eating a human.Wzrd1 (talk) 22:50, 17 October 2014 (UTC)

Gratuitous links[edit]

The basilisk has been written up in Slate (who avoided linking us, though we're pretty obviously a major source ... tch!) and this article is getting a shitload of hits. So, reaction!

- David Gerard (talk) 19:06, 24 July 2014 (UTC)

I kind of hope some movie writer sees this stuff. It would be such a great movie. Audiences would find it original and terrifying. Maybe the resolution could be the creation of an anti-basilisk, a truly good AI, which will punish the basilisk if the basilisk punishes others. Brianpansky (talk) 02:09, 24 September 2014 (UTC)
Nah, it'd end with the guy blowing up the basilisk and walking off into the sunset with the big-chested pretty girl. Hollywood has lost the ability to think in such creative terms. Though, it does remind me of a MacGyver episode...Wzrd1 (talk) 22:53, 17 October 2014 (UTC)

But quantum physics![edit]

An AI that could be brought into existence to some extent by humans would have to exist in the physically accessible world; that means, most certainly, nowhere else but in our tiny universe. Here we have Heisenberg's uncertainty principle. How would an AI model an exact copy of myself when it cannot, in principle and not just because of measurement restrictions, replicate any state of the universe, i.e. my mind?

Blather. TANSTAAFL (There Ain't No Such Thing As A Free Lunch). A universe needs a hell of a lot of energy to exist; forking would be like perpetual motion, a free lunch in the form of another universe coming out of nothing, not even a big bang.Wzrd1 (talk) 22:55, 17 October 2014 (UTC)
That's like saying you can't make two identical installations of a piece of software: you're assuming it matters if the brain is duplicated down to the subatomic level, but all that would actually matter would be that the functional elements were duplicated with sufficient precision to retain their ability to produce consciousness. Since we're not yet clear on just what consciousness is, you can't really say that would require an unachievable level of precision. But the idea that consciousness is generated by the software rather than the hardware is just plain silly: a duplicate human brain would have to be the equivalent of a virtual machine with simulated hardware. Running several identical VMs does not mean the processes of one are duplicated in the others, no matter what any of them are doing. King Skeleton (talk) 01:42, 18 October 2014 (UTC)
@Wzrd1 The irony is that the Many Worlds Hypothesis hinges entirely on an infinite number of universes, almost entirely identical to ours, popping into existence. Radioactive afikomen Please ignore all my awful pre-2014 comments. 20:48, 29 October 2014 (UTC)
That's not what MWI means. For all of Yudkowsky's lunacy, he actually did a pretty good sequence on MWI explaining what it actually means. In short: there are no new universes popping into existence, there is always only one universe, which just transitions into a more and more "split" quantum state over time. - LucidFox (talk) 17:33, 31 October 2014 (UTC)
That's not really true, and what on earth is a "split" quantum state? The MWI posits that (e.g.) every time a spin-1/2 wavefunction is measured, the universe "splits" into two universes; one in which the spin is spin up, and the other in which the spin is spin down. The MWI is a way to get around the weirdness of wavefunction collapse. If we measure said spin-1/2 wavefunction to be spin up, then our universe is one in which that spin at that time is spin up. There is no "split" wavefunction that says there's some probability we're wrong and it's actually spin down. If there is some global multi-universal wavefunction that says the spin could in fact be down, the domain in which that would be the case would not be in our universe. TL;DR: remove the multiple universes part of MWI and you don't actually solve the "problem" of wavefunction collapse. - Grant (talk) 19:10, 31 October 2014 (UTC)
I may have misinterpreted your point, but for the sake of transparency I'm not going to delete what I wrote. If your point was that one can consider the giant over-arching entangled Everett wavefunction as a single "universe," then that's true. However, each term in that wavefunction represents a different state with some probability of occurring within the broader wavefunction. When any two branches decohere and thus separate, they effectively become unique universes. In other words, "an infinite number of universes exist" and "an infinite number of terms in the Everett wavefunction exist" are both effectively saying the same thing. - Grant (talk) 19:33, 31 October 2014 (UTC)
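(To make "terms in the Everett wavefunction" concrete for the spin-1/2 example above, here is the standard textbook-style picture; it adds nothing beyond what Grant describes.)

\left( \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle \right) \otimes |\text{observer ready}\rangle
\;\longrightarrow\;
\alpha\,|{\uparrow}\rangle\,|\text{observer sees up}\rangle + \beta\,|{\downarrow}\rangle\,|\text{observer sees down}\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1.

After decoherence the two terms stop interfering and behave as separate branches with Born weights |α|² and |β|²; the "extra universe" is just the second term of the one wavefunction, not new matter or energy popping into existence.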
Thanks for pointing that bit out. I always get annoyed when people act like every time a "split" occurs in the Many Worlds interpretation, an extra universe magically pops into existence. All those universes were created at the Big Bang; they just stop being identical when the "splits" happen. 141.134.75.236 (talk) 03:30, 1 November 2014 (UTC)
Yes, that's more or less the idea. These issues of interpretation are largely untestable (and may very well never be testable), so they serve as interesting questions of philosophy more than anything. However, the idea of a universe suddenly popping into existence isn't really feasible from a physics perspective. Quantum mechanics can't arbitrarily create matter. In fact, even something as simple as spontaneous particle-antiparticle creation and annihilation requires a significant foray into quantum field theory to explain. - Grant (talk) 06:37, 1 November 2014 (UTC)
"If your point was that one can consider the giant over-arching entangled Everett wavefunction as a single "universe," then that's true." That's what I meant, yes. My familiarity with QM is limited to pop science articles, so I don't know the math and the correct terminology. What I meant is that there is only one universal wavefunction and everything that ever happens in the universe is the entire universe changing into a different quantum state, without generating any "alternate universes" with new matter and energy out of nothing. - LucidFox (talk) 01:15, 3 November 2014 (UTC)
Ah right, okay. I wasn't sure from your wording whether you were inferring something else. That's correct, though. The MWI does not allow violation of fundamental conservation laws (somewhat unfortunately). - Grant (talk) 03:27, 3 November 2014 (UTC)
Aaaaaand last addendum... It's also worth pointing out that despite the fervent belief of some, these are ultimately just interpretations (not theories). There are many variations on MWI out there, there are many other decoherence-based interpretations of QM, and there are many interpretations that don't bother getting rid of quantum collapse, but simply accept it as a weird quirk. Regardless, we have no way to make real testable predictions about these concepts, and they mostly serve as interesting topics of discussion and thought experiments. I'm not sure going back and forth about the definition of MWI makes sense in and of itself.
P.S. If I sound hostile or mean, I apologize. I'm passionate about quantum mechanics and love these kinds of discussions, so sometimes I get pretty intense. - Grant (talk) 19:37, 31 October 2014 (UTC)