Existential risk

Existential risk (sometimes abbreviated to X-risk) is the term for scientifically plausible risks that may cause the entire human race to become extinct.

Such risks are best studied so we can identify and avoid them. However, we must be careful not to overemphasize risks that really are implausible, at the expense of addressing other serious problems our civilization faces.

Suggested existential risks

  • Large-scale nuclear war. Although this is obviously not empirically testable, some have argued that a large-scale nuclear war would kill off all of humanity. The extinction would likely not happen via the direct casualties, nor even via radiation poisoning, but via nuclear winter wiping out the food chain. Recent modelling supports the view that the food chain would be degraded for an extended period - possibly long enough for billions of people to starve to death.
  • Asteroid. Civilisation-destroying, very large asteroids (or, more generally, near-Earth objects). NEOs come in a range of sizes, and only the largest would be capable of taking out the entire human species.
  • Unfriendly artificial intelligence. An artificial general intelligence deciding that humanity is an impediment, or simply superfluous, to meeting its goals. This risk is mainly promoted by people outside computer science, such as Elon Musk and Stephen Hawking, and by others who push the idea of an 'intelligence explosion.'
  • Pandemic. Theoretically, a worldwide pandemic to which humanity lacks immunity, perhaps exacerbated by the spread of disease vectors via air travel. Some worry that such a pandemic could be created via synthetic biological technology.
  • Climate change. Climate change is a real problem in its own right, but some have suggested that it represents a threat to the entire human species.
  • Runaway nanotechnology. It is feared that self-replicating nanomachines could consume all of the biosphere, including us. This is known as the grey goo scenario.
  • Religious prophecy. Unlike the previous entries, these are not generally mentioned by the scientifically-minded, but apocalyptic predictions have consistently attracted followers for thousands of years.

Plausibility and potential solutions

Large-scale nuclear war

This is obviously a really, really bad thing, but whether such a nuclear war would actually cause the extinction of every last human is debatable. More information can be found at Wikipedia's page on nuclear holocaust. Whether such a war occurs depends solely on human factors such as politics and diplomacy. The most effective course of action for individuals is to vote, to support parties and candidates who promote good international relations rather than national pride and machismo, and to press their governments to take these risks seriously, since defending the lives of its citizens is a fundamental duty of a state.

Asteroid

Although asteroid risks are relatively easy to get a handle on, an accurate probability is impossible to calculate because the frequency of asteroid impacts is not accurately known. Nevertheless, the existential risk posed by asteroids is very, very tiny over human time-frames. This is among the best-understood and best-monitored of all existential risks - NASA carefully tracks all known NEOs large enough to cause the worst damage - although more research is needed on deflection techniques.

Billionaire Elon Musk, among others, apparently has serious plans to set up a very large self-sustaining space colony on Mars, and to do it entirely with private money (well, if you ignore the shedloads of money his company SpaceX is getting from the US government, which is essentially subsidising his rocket R&D). Such an off-Earth colony might be a very effective measure to prevent a civilisation-destroying asteroid from making humanity go extinct, even if deflection attempts fail and the asteroid does strike the Earth. That said, establishing any colony on a possibly sterile Mars would be difficult enough, and creating a long-term self-sustaining Martian colony that is independent of Earth is probably at least an order of magnitude more difficult.
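To get a feel for just how tiny "very, very tiny" is, here is a rough back-of-envelope sketch in Python. The impact rate used below is an illustrative assumption (roughly one extinction-class impact per 100 million years), not a measured figure - the real rate is uncertain, which is exactly the point made above.

    # Rough sketch: probability of at least one extinction-class impact within a
    # human time-frame, treating impacts as a Poisson process.
    # The rate below is an illustrative assumption, not an established figure.
    from math import exp

    assumed_rate_per_year = 1 / 100_000_000   # ~1 such impact per 100 million years (assumption)
    horizon_years = 100                       # roughly a human lifetime

    p_at_least_one = 1 - exp(-assumed_rate_per_year * horizon_years)
    print(f"P(impact within {horizon_years} years) ~ {p_at_least_one:.1e}")  # about 1e-6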

Unfriendly artificial intelligence

Aside from the inherent problems in any idea of an 'intelligence explosion,' it seems likely that the so-called 'value alignment' problem in AI will be solved well before AI systems reach general intelligence. For example, to be useful at all, a cleaning robot must be able to learn what humans consider dirt and garbage and what they consider valuable property. A psychopathic AI that cannot learn and internalize human goals is of no use and will not be developed. Only 8% of respondents to a survey of the 100 most-cited authors in the AI field considered AI to present an existential risk.[1]

If you're still worried, you have folks like Elon Musk, who is a founder of OpenAI and who donated US$10 million towards AI safety research.[2] However, donating money to AI safety research may be wasteful or even counterproductive: the effective altruist (EA) charity evaluator GiveWell has in fact recommended against giving to MIRI,[3] much to the consternation of some in the LessWrong camp who believed - and still believe - that the EA movement would be a useful Trojan horse for getting more donations sent MIRI's way.
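As a purely hypothetical illustration of the cleaning-robot point (none of this code comes from any actual robot or AI project, and all names and data in it are made up), a value-learning system might default to the safe action whenever human-provided labels don't clearly mark an object as garbage:

    # Toy sketch: discard an object only when human-labelled examples give
    # high confidence that it is garbage; otherwise leave it alone.
    # All features, labels, and thresholds here are invented for illustration.
    from collections import Counter

    human_labels = {                      # human feedback: features -> judgement
        ("crumpled", "paper"): "garbage",
        ("banana", "peel"): "garbage",
        ("gold", "ring"): "valuable",
        ("phone", "cable"): "valuable",
    }

    def decide(features, labels, threshold=1.0):
        """Discard only if every matching human label says 'garbage'."""
        votes = Counter(label for feats, label in labels.items()
                        if set(feats) & set(features))
        total = sum(votes.values())
        if total and votes["garbage"] / total >= threshold:
            return "discard"
        return "leave in place"           # safe default when the labels are unclear

    print(decide(("crumpled", "paper"), human_labels))   # discard
    print(decide(("gold", "ring"), human_labels))        # leave in place
    print(decide(("mystery", "object"), human_labels))   # leave in place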

Pandemic

Drastic measures to contain a pandemic would doubtless be employed by governments as soon as they became aware of the nature of the threat. After the 9/11 attacks, the US government temporarily shut down all air travel over the continental United States, and an X-risk pandemic would obviously be far more dangerous than a small group of airplane hijackers. Biology is also way more complicated than many sci-fi fans assume, so engineering an extinction-level pathogen is very, very difficult, and well out of reach for a small band of terrorists.

Climate change

Climate change is currently not thought to be an existential risk, at least within the next 100 years - although more research is needed on worst-case climate scenarios. The non-existential reality is bad enough, though.

Runaway nanotechnology

Eric Drexler, who popularized the idea of nanotechnology, points out that grey goo (nanotech that accidentally eats everything on the planet and turns it into goo) is not an existential risk because it is not a realistic risk at all - although this does not rule out more deliberate uses of nanotechnology for military ends. This is aside from the various fundamental problems with nanotechnology itself.

Religious prophecy

Religious and other mystical claims of the impending end of the world are based on unverifiable visions, "voices from God" which conveniently only one person can hear, or unique interpretations of holy books, and in particular, numerology. Sometimes, they involve believers handing over all their savings to the person making the warning, who then conveniently decides to keep the money after the predicted end of the world fails to materialise.

Stop technological progress?

Some people, such as Sun Microsystems' former chief scientist Bill Joy[4] and MIRI's former Director of Research Ben Goertzel,[5] have argued that in order to avoid existential risks, we ought to halt the march of technological progress, to a greater or lesser extent, either temporarily or permanently. A tiny minority (not necessarily acting out of concern over existential risk) have even decided that it is appropriate to resort to violence to achieve their aims of stopping certain technologies.[6]

However, it is almost impossible to achieve technological relinquishment in any useful (i.e. global) way, even if it were considered desirable. Even if America and Europe both ban a useful technology, one or more countries facing different cultural, political, and economic constraints, such as China, will probably develop it eventually, and quite likely out-compete those who don't adopt it. We are better off exploring other options.

In addition, while it is true that technology itself causes new problems (nuclear proliferation), it is also the only solution to old problems (famine). Of all the existential risks considered here, only asteroid impacts are known for certain to be real and capable of wiping out entire species, and the only possible defences against them involve high technology.

While the machines rising up against their masters is a common sci-fi trope, in reality, there is no incentive to build any machine or AI that is not a tool in humanity's hand, with no will of its own other than what humans give it.[7]

References

  1. Muller & Bostrom. Future Progress in Artificial Intelligence: A Survey of Expert Opinion. https://nickbostrom.com/papers/survey.pdf
  2. [1]
  3. "Thoughts on the Singularity Institute." MIRI was formerly called the Singularity Institute for Artificial Intelligence.
  4. See the Wikipedia article on Why The Future Doesn't Need Us.
  5. Goertzel, Ben. Should Humanity Build a Global AI Nanny to Delay the Singularity Until It's Better Understood? Journal of Consciousness Studies, 2012
  6. [2]
  7. Three Arguments Against the Singularity

See also