Existential risk

Existential risk (sometimes abbreviated to X-risk) is the term for scientifically plausible risks that may cause the entire human race to become extinct.

Such risks are best studied so we can identify and avoid them. However, we must be careful not to overemphasize risks that really are implausible, at the expense of addressing other serious problems our civilization faces.

Suggested existential risks[edit]

  • Large-scale nuclear war. Although this is obviously not something most people want to see empirically tested, some have argued that large-scale nuclear war would kill off all of humanity. The extinction would most likely come not from the direct casualties, nor even from radiation poisoning, but from nuclear winter wiping out the food chain. Recent modelling supports the view that the food chain would be degraded for an extended period — possibly long enough for billions of people to starve to death.
  • Asteroid. Civilisation-destroying, very large asteroids (or, more generally, near-Earth objects). NEOs come in a wide range of sizes, and only the largest would be capable of taking out the entire human species.
  • Cosmic threats. Events such as a nearby supernova, a gamma ray burst, or a near encounter with a large wandering object (a planet, star, or black hole) could directly wipe out all life or disrupt the solar system so badly that the Earth is no longer able to sustain life.[1]
  • Unfriendly artificial intelligence. An artificial general intelligence deciding that humanity is an impediment, or simply superfluous, to meeting its goals. Mainly claimed by experts from outside computer science, such as Elon Musk and Stephen Hawking, and others who promote the idea of an 'intelligence explosion', though also by some computer scientists like Stuart Russell.[2]
  • Pandemic. Theoretically, a worldwide pandemic to which humanity lacks immunity, perhaps exacerbated by the spread of disease vectors via air travel and by denialist politics (as with COVID-19 denialism). Some worry such a pandemic may be created via synthetic biology.
  • Climate change. Climate change is a real problem in its own right, but some have suggested that it represents a threat to the entire human species.
  • Runaway nanotechnology. It is feared that self-replicating nanomachines could consume all of the biosphere, including us. This is known as the grey goo scenario.
  • Cosmological phase transition. A variety of hypothetical mechanisms have been posited, generally rooted in quantum physics, that could destroy the entire universe. Suggested mechanisms include vacuum decay (the transition from a metastable to a more stable ground state of the vacuum) and a phase change propagating through all the matter of the universe. Kurt Vonnegut portrayed a smaller-scale fictional version in the form of "Ice-9", which could theoretically cause all the water in the world to change state and render it unable to support life; that's just fiction, though, and physicists worry about something affecting the entire universe.[3][4]
  • Religious prophecy. Though not generally entertained by the scientifically-minded the way the previous ones are, apocalyptic predictions of this kind have consistently attracted followers for thousands of years.

Plausibility and potential solutions[edit]

Large-scale nuclear war[edit]

This is obviously a really, really bad thing, but whether such a nuclear war would actually cause the extinction of every last human is debatable. More information can be found at Wikipedia's page on nuclear holocaust. Whether such a thing occurs depends solely on human factors such as politics and diplomacy. The most effective course of action for individuals would be to vote regularly, to vote for parties and candidates that promote good international relations rather than national pride and machismo, and to persuade their governments to take these risks seriously, since defending the lives of its citizens is a fundamental duty of a state.

Asteroid[edit]

Although asteroid risks are relatively easy to get a handle on, an accurate probability is hard to pin down because the frequency of large impacts is not precisely known. Nevertheless, the existential risk posed by asteroids is very, very tiny over human time-frames. This is among the best-understood and best-monitored of all existential risks — NASA carefully tracks all known NEOs large enough to cause the worst damage — although more research is needed on deflection techniques. Billionaire Elon Musk, among others, has apparently serious plans to set up a very large self-sustaining space colony on Mars, and to do it entirely with private money (well, if you ignore the shedloads of money his company SpaceX is getting from the US government, which is essentially subsidising his rocket R&D). Such an off-Earth colony might be an effective backstop against a civilisation-destroying asteroid making humanity go extinct, even if deflection attempts fail and the asteroid does strike the Earth. That said, merely establishing a colony on a possibly sterile Mars would be difficult enough; creating a long-term self-sustaining Martian colony that is independent of Earth is probably at least an order of magnitude harder.
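
For a sense of the orders of magnitude involved, here is a back-of-the-envelope sketch (a minimal Poisson model; the once-per-100-million-years rate for civilisation-threatening impactors is an assumed round figure used purely for illustration, not a measured value):

```python
# Back-of-the-envelope sketch with illustrative assumptions only:
# suppose civilisation-threatening impacts (10-km-class objects) occur
# on average once per 100 million years, and treat them as a Poisson process.
import math

rate_per_year = 1 / 100_000_000   # assumed average impact rate
horizon_years = 100               # a "human time-frame": one century

# Probability of at least one such impact within the horizon.
p_impact = 1 - math.exp(-rate_per_year * horizon_years)
print(f"P(at least one impact in {horizon_years} years): {p_impact:.1e}")  # ~1e-06
```

On these (assumed) numbers, the per-century odds come out around one in a million: "very, very tiny" indeed, though not zero.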

Cosmic threats[edit]

It isn't clear exactly how damaging a nearby supernova would be, and predictions of the frequency of supernovae vary significantly. One estimate suggests that a supernova occurs within 10 parsecs (33 light years) of Earth, roughly the maximum distance at which one could seriously harm the biosphere, about once every 240 million years; other estimates range from once every 100 million years to once every 20 billion years. (For the record, the closest currently known supernova candidate is 155 light years away.) Gamma ray bursts are extremely energetic but last only a short period of time, so the damage would be limited — one might strip half of the ozone layer, which would take a few years to build back up, but it would be unlikely to cause total extinction immediately. There is little clear evidence of such events causing major damage in Earth's past, although some have attributed the Ordovician–Silurian extinction to a supernova or gamma ray burst.[5][6] It would be virtually impossible to prevent such an event: we may be able to predict the behavior of well-studied stars, but an unobserved pair of white dwarfs or neutron stars, or an undiscovered binary system containing a white dwarf, could collide close enough to us and go supernova, and current technology would be helpless even with advance knowledge. Solutions would involve leaving the Earth, although it might be necessary to move a significant distance away from the entire solar system; alternatively, we could sit tight, but without interstellar travel our options are very limited.
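
Plugging the recurrence figures quoted above into the same kind of Poisson arithmetic gives the per-century odds of a dangerously close supernova (a sketch using only the estimates mentioned in this section):

```python
# Convert "one nearby supernova every N years" into a per-century probability,
# assuming a simple Poisson process. The three recurrence times are the
# estimates quoted above (240 million, 100 million, and 20 billion years).
import math

horizon_years = 100
for label, recurrence_years in [("central estimate", 240e6),
                                ("high-end estimate", 100e6),
                                ("low-end estimate", 20e9)]:
    p = 1 - math.exp(-horizon_years / recurrence_years)
    print(f"{label}: P(nearby supernova per century) = {p:.1e}")
# central ~4e-07, high-end ~1e-06, low-end ~5e-09
```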

Unfriendly artificial intelligence[edit]

Aside from the inherent problems in any idea of an 'intelligence explosion', it seems likely that the so-called 'value alignment' problem in AI will be solved well before AI systems reach general intelligence. For example, a cleaning robot must be able to learn what humans consider dirt and garbage, and what they consider valuable property, to be useful at all. A psychopathic AI which cannot learn and internalize human goals is of no use and will not be developed. Only 8% of respondents to a survey of the 100 most cited authors in the AI field considered AI to present an existential risk.[7] If you're still worried, you have folks like Elon Musk, a founder of OpenAI, who donated US$10 million towards AI safety research in 2015.[8] However, donating money to AI safety research may be wasteful or even counterproductive; for example, effective altruist (EA) charity evaluator GiveWell has in fact recommended against giving to MIRI,[9] much to the consternation of some in the LessWrong camp who believed — and still believe — that the EA movement would be a useful Trojan horse for getting more donations sent MIRI's way.
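
To make the value-learning point concrete, here is a toy sketch (the Obj class and both "models" are entirely hypothetical, invented for this example and not based on any real system): a cleaning agent that has not internalized which objects its owners actually want removed is useless as a product.

```python
# Hypothetical toy example: a cleaning agent is only useful if its learned
# model of "garbage" matches what the humans around it actually value.
from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    is_garbage: bool          # ground truth: what the human wants removed

def act(obj: Obj, thinks_is_garbage) -> str:
    """The agent removes whatever its learned model labels as garbage."""
    return "remove" if thinks_is_garbage(obj) else "leave"

room = [Obj("banana peel", True), Obj("wedding ring", False), Obj("dust", True)]

aligned = lambda o: o.is_garbage      # has internalized human preferences
misaligned = lambda o: True           # treats everything as garbage

for label, model in [("aligned", aligned), ("misaligned", misaligned)]:
    print(label, {o.name: act(o, model) for o in room})
# The misaligned agent "cleans up" the wedding ring too, which is exactly why
# nobody would ship it; value learning gets solved as a precondition of usefulness.
```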

Pandemic[edit]

For much of human history, pandemics were the most plausible existential risk to mankind. However, our increased understanding of disease and sanitation has gone a significant way towards decreasing this risk. Knowledge of the basic mechanisms of disease spread, and of how to identify and minimize them, goes a long way towards preventing large-scale outbreaks, while antibiotics and vaccines make treating bacterial infections and preventing many viral ones far more feasible. Drastic measures to contain a pandemic would doubtless be employed by governments as soon as they became aware of the nature of the threat.[citation needed] After the 9/11 attacks, the US government temporarily shut down all air travel over the continental United States, and an X-risk pandemic would obviously be much more dangerous than a small group of airplane hijackers.

One common concern cited by those fearing pandemics is the over-use of antibiotics, which risks encouraging antibiotic-resistant strains to develop. It would be difficult for a single bacterium to develop resistance to all forms of antibiotics, owing to the large number of different ones available, but were one to do so, humanity would be limited to 'old-fashioned' means of disease prevention, such as quarantine and the mitigation of potential vectors that could spread the disease. However, historical evidence suggests that naturally evolved pathogens, even ones untreatable with existing drugs, would be unlikely to come close to eradicating human life. The period between the adoption of proper sanitation and the development of antibiotics shows that sanitation alone significantly decreased the loss of life due to disease. With superior medical knowledge and resources available, modern humanity should be better placed to prevent the spread of even a hypothetical drug-resistant disease.
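
To illustrate how far those 'old-fashioned' means can go on their own, here is a minimal SIR epidemic sketch (the parameter values, the population size, and the total_ever_infected helper are invented purely for illustration, not taken from any real outbreak):

```python
# Minimal SIR toy model with illustrative parameters: lowering the contact
# rate beta (quarantine, vector control, sanitation) shrinks the outbreak
# even when no drugs are available at all.
def total_ever_infected(beta, gamma=0.1, days=2000, n=1_000_000, i0=100):
    s, i, r = float(n - i0), float(i0), 0.0
    for _ in range(days):              # simple daily Euler steps
        new_inf = beta * s * i / n     # new infections today
        new_rec = gamma * i            # recoveries today
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return n - s                       # everyone who was ever infected

print(f"unmitigated (R0 = 3.0): {total_ever_infected(beta=0.30):,.0f}")
print(f"mitigated   (R0 = 1.2): {total_ever_infected(beta=0.12):,.0f}")
# In this toy run, cutting the contact rate drops the share of the population
# ever infected from roughly nine in ten to roughly one in three.
```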

Arguably the most probable cause of a life-threatening disease would be genetic engineering, which could theoretically create a disease immune to all known countermeasures, or give it traits that are unlikely to arise through ordinary evolution, such as increased lethality or an extended dormant phase during which an individual is infectious but not obviously ill, making the disease harder to contain.[note 1] Luckily for humanity, biology is far more complicated than many sci-fi fans assume, so engineering such a pathogen is very, very difficult and well out of reach of a small band of terrorists. Furthermore, most people capable of creating such a theoretical engineered pathogen are unlikely to wish to kill themselves, their loved ones, and all of humanity.[citation needed] There are also containment practices designed to limit the potential spread of engineered organisms, which experienced genetic engineers are (ideally) following precisely to prevent this sort of scenario.

Climate change[edit]

Climate change is currently not thought to be an existential risk, at least within the next 100 years — although more research is needed on worst-case climate scenarios. The non-existential reality is bad enough, though. Hypothetically, a runaway greenhouse effect is a situation where a planet gets hotter and hotter through a positive feedback loop until all the oceans boil off and there is no possibility of sustaining life, as happened on Venus. However, this is considered virtually impossible on the Earth.[10]

Runaway nanotechnology[edit]

Eric Drexler, who popularized the idea of nanotechnology, points out that grey goo (nanotech that accidentally eats everything on the planet and turns it into goo) is not an existential risk because it is not a realistic risk at all — although this does not rule out more deliberate uses of nanotechnology for military ends. This is aside from the various fundamental problems with nanotechnology itself.

Cosmological phase transition[edit]

It isn't clear how likely such an event is, and assessing it would require a full understanding of subatomic physics. The fact that it has not happened in the last 14 billion years suggests it isn't terribly likely, and some estimates put it in the very distant future in cosmological terms, but if it did happen, we'd be shit outta luck.

Religious prophecy[edit]

Religious and other mystical claims of the impending end of the world are based on unverifiable visions, "voices from God" which conveniently only one person can hear, or unique interpretations of holy books, and in particular, numerology. Sometimes, they involve believers handing over all their savings to the person making the warning, who then conveniently decides to keep the money after the predicted end of the world fails to materialise.

Stop technological progress?[edit]

Some people, such as Sun Microsystems' former chief scientist Bill Joy[12] and MIRI's former Director of Research Ben Goertzel,[13] have argued that in order to avoid existential risks, we ought to halt the march of technological progress, to a greater or lesser extent, either temporarily or permanently. A tiny minority (not necessarily acting out of concern over existential risk) have even decided that it is appropriate to resort to violence to achieve their aims of stopping certain technologies.[14]

However, it is almost impossible to achieve technological relinquishment in any useful (i.e. global) way, even if it were considered desirable. Even if America and Europe both ban a technology, if it is useful, one or more countries facing different cultural, political, and economic constraints (China, for example) will probably develop it eventually, and quite likely out-compete those who don't adopt it. We are better off exploring other options.

In addition, while technology creates new problems (e.g. nuclear proliferation), it is also the only solution to old ones (e.g. famine). Of all the existential risks considered here, only asteroid impacts are known for certain to be real and capable of wiping out entire species, and the only possible defences against them involve high technology.

While the machines rising up against their masters is a common sci-fi trope, in reality, there is no incentive to build any machine or AI that is not a tool in humanity's hand, with no will of its own other than what humans give it.[15]

Reducing existential risk[edit]

Various organizations are committed to reducing existential risk or spreading awareness of it. One of them is Nick Bostrom's Future of Humanity Institute (FHI) at Oxford University. In a paper called 'Astronomical Waste', Bostrom argued that the continued existence of human civilization has immense moral value: space colonization could support vast populations of humans, their lives made happy by advanced technology, so reducing existential risk is, on his view, the most valuable cause.[16]

In 2014, an organization known as the Future of Life Institute (FLI) was established, with goals similar to FHI's. Its founders included the physicist Max Tegmark and Skype co-founder Jaan Tallinn, and in 2018 its scientific advisory board included Elon Musk and Stephen Hawking.[17] FLI's primary goal has been to spread awareness about AI 'risk', and it has distributed 2 million dollars to 10 researchers it deemed to be carrying out AI-risk-reducing research.[18] On the other hand, FLI does not believe that the arrival of human-level AI is imminent, saying it is decades away and might not even happen in the 21st century; it focuses on AI risk because it expects solving the AI control problem to take a long time.[19]

In 2012, the Centre for the Study of Existential Risk was established at Cambridge University; its founders included the above-mentioned Jaan Tallinn and Lord Martin Rees (Astronomer Royal), and it has collaborated with FHI.[20][21] Other such organizations include the Global Catastrophic Risk Institute,[22] the X-Risks Institute,[23] Saving Humanity from Homo Sapiens,[24] the Lifeboat Foundation,[25] the Foresight Institute,[26] and the Skoll Global Threat Fund.[27]

Many organizations are combatting specific forms of existential risk. Those combating, or claiming to combat, AI-related x-risk alone include:
(i) Centre for Human-Compatible AI[28]
(ii) Machine Intelligence Research Institute[29]
(iii) Leverhulme Centre for the Future of Intelligence[30]

See also[edit]

Notes[edit]

  1. A mutation can make an existing contagion lethal, but it's unlikely to make it lethal only after x days of being non-lethal. In (over-simplified) evolutionary terms, if a contagion is successfully spreading itself without harming its host, there is little selective pressure for it to switch to killing its host, since a dead host can no longer spread the contagion.

References[edit]

  1. See the Wikipedia article on Global catastrophic risk.
  2. https://people.eecs.berkeley.edu/~russell/research/future/
  3. Vacuum decay: the ultimate catastrophe, Cosmos Magazine, Sep 14, 2015
  4. Q: Could Kurt Vonnegut’s “Ice-9 catastrophe” happen?, Ask A Mathematician: Ask A Physicist, Nov 3, 2012
  5. See the Wikipedia article on Near-Earth supernova.
  6. Gamma Ray Burst Mass Extinction, Cosmos: Study Astronomy Online at Swinburne University, accessed 18 Mar 2019
  7. Müller & Bostrom, Future Progress in Artificial Intelligence: A Survey of Expert Opinion, https://nickbostrom.com/papers/survey.pdf
  8. Elon Musk donates $10M to keep AI beneficial, Future of Life Institute, Oct 12, 2015, https://futureoflife.org/2015/10/12/elon-musk-donates-10m-to-keep-ai-beneficial/
  9. "Thoughts on the Singularity Institute." MIRI was formerly called the Singularity Institute for Artificial Intelligence.
  10. See the Wikipedia article on Tipping points in the climate system.
  12. See the Wikipedia article on Why The Future Doesn't Need Us.
  13. Goertzel, Ben. Should Humanity Build a Global AI Nanny to Delay the Singularity Until It's Better Understood? Journal of Consciousness Studies, 2012
  14. A luddite link to nano-terrorists, Michele Catanzaro, The Guardian, 8 Nov 2013
  15. Three Arguments Against the Singularity, Charlie Stross, Antipope.org
  16. https://nickbostrom.com/astronomical/waste.html
  17. https://en.wikipedia.org/wiki/Future_of_Life_Institute
  18. https://futureoflife.org/ai-safety-research/
  19. https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
  20. https://www.cser.ac.uk/
  21. https://www.lesswrong.com/posts/2idJBvzzj3dP36HSA/update-on-establishment-of-cambridge-s-centre-for-study-of
  22. http://gcrinstitute.org/
  23. https://ieet.org/index.php/IEET2/more/torres20151030
  24. http://shfhs.org/whatarexrisks.html
  25. https://lifeboat.com/ex/programs
  26. https://foresight.org/about-us/our-mission/
  27. http://www.skollglobalthreats.org/about-us/mission-and-approach/
  28. https://humancompatible.ai/
  29. https://intelligence.org/
  30. http://lcfi.ac.uk/