Existential risk

Existential risk (abbreviated to X-risk) is the term for scientifically plausible risks that may cause the entire human race to become extinct.

More broadly, it may also be used to refer to risks of the destruction of all intelligent beings (of human-level intelligence or above) in some sphere of interest, such as the Earth or the Universe. While at present the two definitions amount to exactly the same thing,[1] they would start to refer to different things if and when artificial intelligences as smart as humans came onto the scene. A moment's thought, however, shows that a definition of existential risk which only cares about intelligence as a whole is not a very satisfactory definition of what we should care about. The moral atrocity that AIs would commit if they deliberately killed, or even just out-competed (for habitat), all humans is bad enough. However, even leaving that aside, a future in which an AI takes over the world and spends its days tiling the Universe with paperclips, pop art, or portraits of a North Korean dictator, leaving humans with nowhere to live, is not one that anyone should want, regardless of how much they like robots. The latter class of scenario is one of the motivations for the AI safety research undertaken by organisations such as the Machine Intelligence Research Institute (MIRI).

Examples of existential risks

  • Although this is obviously not empirically testable, some have argued that large-scale nuclear war would kill off all of humanity. The extinction would likely not happen via the direct casualties, nor even via radiation poisoning, but via a nuclear winter wiping out the food chain. Recent modelling supports the view that the food chain would be degraded for an extended period of time, possibly long enough for billions of people to starve to death.
  • Civilisation-destroying, very large asteroids (or, more generally, near-Earth objects). NEOs come in a wide range of sizes, and only the largest could wipe out the entire human species. This is among the best-understood and best-monitored of all existential risks; NASA carefully tracks all known NEOs large enough to cause the worst damage, although more research is needed on deflection techniques.
  • Theoretically, a worldwide pandemic to which humanity lacks immunity, spread rapidly around the globe by air travel. Drastic measures to contain such a pandemic would doubtless be employed by governments as soon as they became aware of the nature of the threat: after the 9/11 attacks, the US government temporarily shut down all air travel over the continental United States, and an X-risk pandemic would obviously be much more dangerous than a small group of airplane hijackers. (A toy model of how quickly an unchecked outbreak can spread appears after this list.)
    • A relatively novel X-risk is a pandemic created via synthetic biological technology. Unlike with some other X-risks, there are currently no organisations dedicated to investigating this threat and what we might be able to do about it.
  • An artificial general intelligence deciding that humanity is an impediment or superfluous to meeting its goals. Though it is disputed whether this is an X-risk we need to worry about in the short term (many actual AI researchers don't think so), it probably is in the long term. This is because the number of scientists and academics who think we will never be able to reproduce human consciousness in a machine is quite small, and even they aren't necessarily sure that their arguments are true, or relevant (AIs might not need to be either conscious or human-like to be dangerous, for example).
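To illustrate the pandemic point above, here is a minimal back-of-the-envelope outbreak sketch in Python. It is not taken from any epidemiological source: the discrete-time SIR model, the population figure, and the transmission and recovery rates are all illustrative assumptions, chosen only to show how quickly an outbreak with a reproduction number above 1 can engulf a population if nothing is done to contain it.

```python
# A minimal, illustrative outbreak sketch (discrete-time SIR model).
# Nothing here comes from the article: the population figure and the
# transmission/recovery rates are invented purely for illustration.

N = 8_000_000_000            # rough world population (assumption)
beta, gamma = 0.4, 0.1       # assumed daily transmission and recovery rates (R0 = 4)
S, I, R = N - 1.0, 1.0, 0.0  # susceptible, infectious, recovered

day = 0
while I >= 1 and day < 1000:
    new_infections = beta * S * I / N   # new cases today
    recoveries = gamma * I              # people recovering today
    S -= new_infections
    I += new_infections - recoveries
    R += recoveries
    day += 1

print(f"Outbreak over after ~{day} days; ~{R / N:.0%} of the population was infected")
```

With these made-up parameters the outbreak burns through most of the world's population within a year, which is exactly why the drastic containment measures mentioned above would matter.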

Not existential risks

Scientific

Climate change is currently not thought to be an existential risk, at least within the next 100 years — although more research is needed on worst-case climate scenarios. The non-existential reality is bad enough, though.

Molecular nanotechnology founder Eric Drexler claims that grey goo (nanotech that accidentally eats everything on the planet and turns it into goo) is not an existential risk because it is not a realistic risk at all — although this does not rule out more deliberate uses of nanotechnology for military ends.

Other

Religious and other unscientific claims of the impending end of the world are not considered existential risks, as they are not based on scientific evidence. Usually they are based on unverifiable visions, "voices from God" which conveniently only one person can hear, or unique interpretations of holy books — and in particular, numerology. Sometimes, they involve believers handing over all their savings to the person making the warning, who then conveniently decides to keep the money after the predicted end of the world fails to materialise.

Probabilities of existential risks

Although asteroid risks are relatively easy to get a handle on compared with other existential risks, an accurate probability is still out of reach because the frequency of the largest impacts is not precisely known. Nevertheless, the existential risk posed by asteroids is very, very tiny over human time-frames.
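As a rough illustration of just how tiny, here is a back-of-the-envelope sketch in Python. The impact rate below (roughly one extinction-class impact per hundred million years) and the Poisson model are illustrative assumptions, not figures taken from the article or from NASA.

```python
import math

# Back-of-the-envelope sketch only. The impact rate is an assumption chosen
# for illustration (about one extinction-class impact per 100 million years),
# and arrivals are modelled as a Poisson process.
RATE_PER_YEAR = 1 / 100_000_000

def impact_probability(years, rate=RATE_PER_YEAR):
    """Probability of at least one impact within the given time window,
    assuming impacts arrive as a Poisson process with a constant rate."""
    return 1 - math.exp(-rate * years)

for window in (100, 10_000, 1_000_000):
    print(f"P(at least one impact within {window:,} years) ~ {impact_probability(window):.1e}")
```

On those assumptions the chance in any given century is on the order of one in a million, and even over a million years it is only about one percent, consistent with the "very, very tiny over human time-frames" assessment above.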

Existential risks which depend on human factors such as politics, diplomacy, and technological development are much harder to estimate, but at least some of them are probably worth worrying about now. Arguably, though, that worrying is best done by governments rather than individuals: the most effective course of action for an individual might be to persuade their government to take these risks seriously, since defending the lives of its citizens is a fundamental duty of a state.

Proposals to address existential risks

How we might address existential risks varies enormously depending on the nature of the risk. For example, billionaire Elon Musk has apparently serious plans to set up a very large, self-sustaining colony on Mars, and to do it entirely with private money (well, if you ignore the shedloads of money his company SpaceX is getting from the US government, which is essentially subsidising his rocket R&D). Such an off-Earth colony might be a very effective measure to prevent a civilisation-destroying asteroid from making humanity go extinct, even if deflection attempts fail and the asteroid does strike the Earth. However, such a colony would probably be of very little benefit against a genocidal superintelligence, which could target it with nuclear or simply kinetic weapons. Also, establishing a colony on a possibly sterile Mars would be difficult enough; creating a long-term self-sustaining Martian colony that is independent of Earth is probably at least an order of magnitude more difficult.

Unfortunately, even if AI boosters and doomsters turn out to be several centuries too optimistic or pessimistic about when superintelligence will be developed, the superintelligence threat is still likely to manifest itself long before a civilisation-destroying asteroid strike does, simply because such strikes are so rare. Elon Musk has, in any case, recently donated US$10 million towards AI safety research[2] (which includes an $80,000 grant to MIRI).

Addressing dangerous AIs

There were those of us who fought against it, but in the end we could not keep up with the expense involved in the arms race, the space race, and the peace race. At the same time our people grumbled for more nylons and washing machines. Our doomsday scheme cost us just a small fraction of what we had been spending on defense in a single year. The deciding factor was when we learned that your country was working along similar lines, and we were afraid of a doomsday gap.
—Ambassador de Sadesky in the film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb

The above reasoning, however, assumes a single superintelligence fighting against humanity, whereas there are other possibilities, such as one superintelligence opposed to most of humanity versus another allied with at least some countries or political factions. In that scenario, as with nuclear weapons, the best defence against a (central-planning?) superintelligence controlled by North Korea or China might be to invest in developing our own superintelligence as quickly as possible.

MIRI is one organisation that says it is researching ways to make AI "friendly", but just because it says the issue is important and that it is working on a solution, there is no guarantee that MIRI will actually find one, or that any solution it finds will in fact be "friendly" or even implementable. Effective altruist (EA) charity evaluator GiveWell has in fact recommended against giving to MIRI,[citation needed] much to the consternation of some in the LessWrong camp who believed (and still believe) that the EA movement would be a useful Trojan horse for getting more donations sent MIRI's way. However, many other effective altruists without LessWrong affiliations have judged in favor of donating to MIRI.

Relinquishing technological progress

Some people — such as Sun Microsystems' former chief scientist Bill Joy[3] and MIRI's former Director of Research Ben Goertzel[4] — have argued that in order to avoid existential risks, we ought to halt the march of technological progress, to a greater or lesser extent, either temporarily or permanently. A tiny minority (not necessarily acting out of concern over existential risk) have even decided that it is appropriate to resort to violence to achieve their aims of stopping certain technologies.[5]

However, it is almost impossible to achieve technological relinquishment in any useful (i.e. global) way, even if it were considered desirable. Even if America and Europe both ban a technology, one or more countries facing different cultural, political, and economic constraints (China, for example) will probably develop it eventually, if it is useful, and quite likely out-compete those who don't adopt it. We are better off exploring other options.

References

  1. No, aliens don't visit people. People claiming to have been abducted by aliens would have claimed to have seen angels in previous centuries.
  2. [1]
  3. See the Wikipedia article on Why The Future Doesn't Need Us.
  4. Goertzel, Ben. Should Humanity Build a Global AI Nanny to Delay the Singularity Until It's Better Understood? Journal of Consciousness Studies, 2012
  5. [2]

See also