Five of the most serious threats to human existence
Wednesday, 04 June 2014 04:00
Caught up in the hype over the "crises" currently facing humanity, we forget about the many generations we hope will come after us. I am not talking about people who will live 200 years from now, but about those who will live a thousand or ten thousand years from now. I use the word "hope" because we face risks, called existential risks, that could wipe out humanity. These risks are not just major disasters, but disasters that could put an end to history itself.
Yet not everyone ignores the distant future. Mystics like Nostradamus have regularly tried to set a date for the end of the world. H. G. Wells tried to develop a science of forecasting and famously described the far future of humanity in his book "The Time Machine". Other writers have imagined other distant futures in order to warn, to entertain, or to fantasize.
But even if all these pioneers and futurists had not thought about humanity's future, the outcome would have been the same. There was not much people in earlier eras could have done to save us from an existential crisis, or even to cause one.
Today we are in a privileged position. Human activity is gradually shaping the future of our planet. And although we are still far from being able to control disasters, we are developing technologies that may mitigate their effects, or at least let us do something about them.
Future imperfect
However, these risks remain poorly understood. There is a sense of powerlessness and fatalism about them. People have talked about apocalypses for millennia, but few have tried to prevent them. Humans are also bad at dealing with problems that have not yet occurred (partly due to the availability heuristic: the tendency to overestimate the probability of events whose examples come easily to mind, and to underestimate events we cannot readily recall).
If humanity ceased to exist, the loss would be at least equivalent to the loss of all living individuals and the frustration of all their goals. But in fact the loss would likely be far greater. Human extinction means the loss of the meaning created by past generations, the loss of the lives of all future generations (and the number of future lives could be astronomical), and of all the value they might have created. If consciousness and intelligence are lost, it may mean that value itself ceases to exist in the universe. This is a very serious moral reason to work tirelessly to prevent existential threats. And on this path we cannot afford a single failure.
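To give a sense of what "astronomical" means here, a rough back-of-the-envelope calculation can be sketched. All the figures below are illustrative assumptions, not estimates from the article:

```python
# Illustrative only: a crude lower bound on the number of future human
# lives that extinction would foreclose. Every figure here is an assumed
# placeholder, not a claim from the article.
population = 10e9        # assumed steady future population (10 billion)
lifespan_years = 80      # assumed average lifespan
horizon_years = 1e9      # assumed habitable-Earth horizon (~1 billion years)

generations = horizon_years / lifespan_years
future_lives = population * generations
print(f"{future_lives:.1e} potential future lives")  # on the order of 10^17
```

Even this conservative sketch (ignoring space settlement or digital minds) yields over a hundred quadrillion potential lives, dwarfing the roughly 10^11 humans who have ever lived.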
With that in mind, I have chosen what I consider the five most serious threats to human existence. But some caveats should be kept in mind, for this list is not final.
Over the past century we have discovered or created new existential risks: supervolcanoes were discovered in the early 1970s, and before the Manhattan Project nuclear war was impossible. So we should expect other threats to appear. In addition, some risks that look serious today may disappear as knowledge grows. The probabilities of particular events also change over time, sometimes because we have become concerned about a risk and removed its causes.
Finally, the fact that something is possible and potentially hazardous does not mean it is worth worrying about. There are risks we can do nothing about, such as gamma-ray bursts from distant cosmic explosions. But if we learn that something can be done, the priorities change. For example, with better sanitation and the advent of vaccines and antibiotics, pestilence stopped being seen as divine retribution and came to be seen as a consequence of poor public health.
1. Nuclear War
While only two atomic bombs have been used in war, at Hiroshima and Nagasaki in the Second World War, and nuclear stockpiles have shrunk from their Cold War peak, it would be a mistake to assume that nuclear war is impossible. In fact, it is not even improbable.
The Cuban Missile Crisis came very close to turning nuclear. If we assume one such crisis every 69 years, with a one-in-three chance of it escalating into nuclear war, the annual probability of such a catastrophe works out to roughly 1 in 200.
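The arithmetic behind that estimate can be checked directly. The two inputs (crisis frequency and escalation probability) are the article's assumptions, not measured data:

```python
# Back-of-the-envelope check of the article's estimate. Assumed inputs:
# one Cuban-Missile-style crisis every 69 years, and a 1-in-3 chance
# that such a crisis escalates into nuclear war.
crisis_rate_per_year = 1 / 69   # assumed crisis frequency
p_escalation = 1 / 3            # assumed chance a crisis goes nuclear

p_war_per_year = crisis_rate_per_year * p_escalation
print(f"annual chance of nuclear war: about 1 in {round(1 / p_war_per_year)}")
# → about 1 in 207, i.e. roughly 1 in 200
```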
Worse still, the Cuban Missile Crisis is only the best-known case. The history of Soviet-American nuclear deterrence is full of close calls and dangerous mistakes. The actual probability has shifted with international tensions, but it seems implausible that the annual chance of nuclear conflict would be much below 1 in 1,000.
A full-scale nuclear war between major powers would kill hundreds of millions of people, in the exchange itself or in its aftermath: an unimaginable catastrophe. Yet that alone does not make it an existential risk.
The same goes for the risks from fallout, which are often exaggerated. Fallout is deadly locally, but globally it is a relatively limited problem. Cobalt bombs have been proposed as a hypothetical doomsday weapon whose fallout could kill everyone, but in practice they would be very difficult and expensive to build, and their construction is practically out of reach.
The real threat is nuclear winter: soot lofted into the stratosphere, causing a long-term cooling and drying of the planet. Modern climate simulations show that this could make agriculture impossible across much of the world for years. In such a scenario, billions would starve, and the few survivors would be exposed to other threats, such as disease. The main uncertainty is how the soot would behave once aloft: depending on its composition the outcomes could differ greatly, and we currently have no reliable way of estimating this.
2. Pandemics caused by bioengineering
Natural pandemics have killed more people than wars. But natural pandemics are unlikely to be existential threats. Usually some people are resistant to the pathogen, and the descendants of survivors become more resistant still. Evolution also does not favor parasites that wipe out their host organisms, which is why syphilis, as it spread through Europe, evolved from a vicious killer into a chronic disease.
Unfortunately, we can now make diseases nastier. One of the most famous examples is the insertion of an extra gene into mousepox, the mouse variant of smallpox, which made it far more lethal and able to infect vaccinated individuals. Recent work on avian influenza has shown that the contagiousness of a disease can be deliberately boosted.
At present, the risk that someone will deliberately release a devastating pathogen is small. But as biotechnology improves and gets cheaper, a growing number of groups will be able to make diseases worse.
Most work on biological weapons has been done by governments seeking something controllable, because wiping out humanity is not militarily useful. But there will always be people who do things simply because they can. Others may have "loftier" aims. For example, members of the religious group Aum Shinrikyo tried to hasten the apocalypse using biological weapons, in addition to their more successful nerve-gas attack. Some people believe the world would be better off without human beings, and so on.
The number of fatalities from biological weapons and epidemic outbreaks appears to follow a power-law distribution: in most incidents the victims are few, but in a small number of incidents they are very many. Given current figures, the risk of a global pandemic from bioterrorism seems small. But that covers only bioterrorism: governments using biological weapons have killed far more people than bioterrorists (about 400,000 people died in the Japanese biowarfare program during the Second World War). And as the technology becomes more powerful, nastier pathogens will become easier to design.
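The heavy-tail behavior of a power law, where rare large events dominate the total toll, can be illustrated with simulated data. The exponent and sample below are assumed for illustration, not fitted to real casualty figures:

```python
# Illustrative sketch with assumed parameters (not real casualty data):
# sampling from a Pareto (power-law) distribution to show its heavy tail.
# Most simulated "events" are small, but the largest few dominate the sum.
import random

random.seed(0)
alpha = 1.5   # assumed power-law exponent (heavier tail for smaller alpha)
xmin = 1.0    # assumed minimum event size

# Inverse-CDF sampling: X = xmin * (1 - U)^(-1/alpha) for uniform U in [0, 1)
events = [xmin * (1.0 - random.random()) ** (-1.0 / alpha)
          for _ in range(100_000)]

events.sort(reverse=True)
share = sum(events[:1000]) / sum(events)
print(f"share of total size from the largest 1% of events: {share:.0%}")
```

For a light-tailed distribution (e.g. normal) the top 1% of events would contribute only slightly more than 1% of the total; here they contribute a large fraction, which is why the worst-case tail, not the typical case, drives the risk.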
3. Superintelligence
Intelligence is very powerful. A slight edge in problem-solving ability and group coordination is why humans left the other apes behind. Now the apes' continued existence depends on human decisions, not on what they do. Being smart is a real advantage for people and organizations, so much effort goes into improving our individual and collective intelligence: from cognition-enhancing drugs to artificial-intelligence software.
The problem is that intelligent systems are good at achieving their goals, but if those goals are badly specified, they can use their power to achieve catastrophic ends in clever ways. There is no reason to think that intelligence by itself makes something act properly and morally. In fact, certain types of superintelligent systems might not obey moral rules even if such rules were valid.
Even more worrying, in trying to explain things to an artificial intelligence we run into deep practical and philosophical problems. Human values are diffuse things that we are bad at specifying, and even if we could specify them, we might not understand all the implications of what we are asking for.
Software-based intelligence could close the gap between machines and humans very quickly, at which point machine capabilities could become dangerous. The problem is that artificial intelligence may relate to biological intelligence in several distinct ways: it can run faster on faster computers, parts of it can be distributed across more computers, versions of it can be tested and updated on the fly, and new algorithms can be introduced that boost its performance.
Many today think an "intelligence explosion" is likely if software becomes good enough at producing better software. If such a jump occurred, there would be a vast gap between the intelligent system (or the people telling it what to do) and the rest of the world. Such a situation could easily lead to disaster if the goals are set badly.
The unusual thing about superintelligence is that we do not know whether a fast and powerful leap in intelligence is even possible: perhaps our civilization as a whole is already improving itself at close to the fastest achievable rate. But there is good reason to think that some technologies could speed certain things up far faster than modern society could improve its control over them. Moreover, we do not yet understand how dangerous various forms of superintelligence would be, or how well mitigation strategies would work. It is very hard to reason about future technology we do not yet have, or about intelligences greater than our own. Of the risks on this list, this one is the most likely to be either truly enormous or a mere mirage.
Surprisingly, this area is not well studied. Even in the 1950s and 1960s people were quite convinced that superintelligence could be created "within a generation", yet safety was neglected. Perhaps people simply did not take their own predictions seriously, or, more likely, saw it as a remote long-term problem.
4. Nanotechnology
Nanotechnology is the control of matter with atomic or molecular precision. It is not dangerous in itself; on the contrary, it would be very good news for most of its applications. The problem is that, as with biotechnology, increasing power also increases the potential for abuses that are very hard to defend against.
The big problem is not the proverbial "grey goo" of self-replicating nanomachines devouring everything. That would require highly sophisticated devices. It is hard to build a machine that can reproduce itself: biology is, by definition, much better at such tasks. Perhaps some maniac could eventually manage it, but there is far lower-hanging fruit on the tree of destructive technology.
The most obvious risk is that atomically precise manufacturing looks ideal for the rapid, cheap production of things such as weapons. In a world where many governments could "print" large numbers of autonomous or semi-autonomous weapons systems (including facilities to expand their production), an arms race could become very intense, and hence unstable, since a first strike before the enemy gains a decisive advantage could look very tempting.
Weapons could also be very small and very precise: a "smart poison" that acts like nerve gas but seeks out its victims, or ubiquitous "gnatbots", miniature surveillance systems for keeping a population obedient, all look entirely possible. And nuclear proliferation and climate-engineering devices could end up in the wrong hands.
We cannot estimate the probability of existential risk from future nanotechnology, but it looks potentially devastating simply because it could give us whatever we wish for.
5. Unknown unknowns
The most unsettling possibility is that there is something out there that is deadly, and we have no idea what it is.
The silence of the sky may be evidence of this. Is the absence of aliens due to life and intelligence being very rare, or to intelligent life tending to get wiped out? If there is a future Great Filter, it must have been noticed by other civilizations too, and even that did not help them.
Whatever such a threat is, it would have to be nearly unavoidable even if you knew it was coming, no matter who or what you are. We know of no threats like this (none of the others on this list work this way), but they may exist.
Note that just because something is unknown does not mean we cannot reason about it. In a remarkable paper (http://arxiv.org/abs/astro-ph/0512204), Max Tegmark and Nick Bostrom show that a certain set of risks must have a probability of less than one in a billion per year, based on the relative age of the Earth.
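To see why Earth's age constrains such risks at all, consider a deliberately naive survivorship calculation. This is only an illustration of the intuition; Tegmark and Bostrom's actual argument is more careful, correcting for observer-selection effects using planet-formation dates:

```python
# Naive survivorship sketch (illustrative only; NOT the paper's actual
# anthropic argument). If exogenous sterilizing catastrophes struck at a
# constant annual rate p, the chance of Earth surviving ~4.5 billion
# years would be (1-p)^t, approximately exp(-p*t) for small p.
import math

earth_age_years = 4.5e9

for p in (1e-9, 1e-10, 1e-11):
    survival = math.exp(-p * earth_age_years)
    print(f"annual risk {p:.0e}: survival probability ≈ {survival:.3f}")
```

Even an annual risk of one in a billion would make Earth's survival to its present age a roughly 1% fluke, which hints at why rates much above that level are hard to reconcile with our planet's long, uninterrupted history (once selection effects are handled properly).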
You may be surprised that climate change and meteor impacts did not make this list.