Scientists are alarmed by the threat of artificial intelligence
Dr. Amnon Eden is a project manager whose main goal is to analyze the potentially destructive effects of AI. Without a proper understanding of the consequences of creating artificial intelligence, its development could end in catastrophe, the scientist believes. At present, society is poorly informed about the debates going on in scientific circles over the potential impact of AI. "In the coming year, 2016, analysis of the possible risks will have to become much more prominent in the thinking of corporations and governments, politicians and decision-makers," says Eden.
The scientist is convinced that the science-fiction scenario of mankind being destroyed by robots may soon become a real problem, since the development of AI has spiraled out of control. For example, Elon Musk, with the support of entrepreneur Sam Altman, decided to create a new $1 billion non-profit dedicated to developing open-source AI designed to surpass the human mind. At the same time, Musk himself ranks artificial intelligence among the "biggest threats to our existence." Steve Wozniak, who co-founded Apple, said in March last year that "the future looks daunting and very dangerous for people... eventually the day will come when computers think faster than us, and they will get rid of slow humans so that companies can operate more efficiently."
Many scientists see a threat in AI. Dozens of well-known scientists, investors and entrepreneurs whose work is in one way or another connected with the development of artificial intelligence have signed an open letter calling for closer attention to the safety and social utility of work in the field of AI. Among the signatories are astrophysicist Stephen Hawking and the founder of Tesla and SpaceX, Elon Musk. The letter, along with an accompanying document compiled by the non-profit Future of Life Institute (FLI), was written amid growing concern about the impact of artificial intelligence on the labor market, and even on the long-term survival of humanity in a world where the capabilities of robots and machines grow almost without restraint.
Scientists understand that the potential of AI today is very high, so it is necessary to fully explore the possibilities of its optimal use for us while avoiding the accompanying pitfalls, the FLI letter notes. Man-made AI systems must do exactly what we want them to do. The Future of Life Institute was founded only last year by a number of enthusiasts, among them Jaan Tallinn, a co-founder of Skype, in order to "minimize the risks facing humanity" and stimulate research with an "optimistic vision of the future." First of all, this concerns the risks posed by the development of AI and robotics. FLI's advisory board includes Musk and Hawking, along with the famous actor Morgan Freeman and other well-known people. According to Elon Musk, the uncontrolled development of artificial intelligence is potentially a greater danger than nuclear weapons.
At the end of 2015, the famous British astrophysicist Stephen Hawking tried to explain his opposition to AI technologies. In his opinion, superintelligent machines will eventually come to regard people as consumables, or as ants that simply get in the way of the tasks facing them. Talking with users of the Reddit portal, Hawking noted that he does not believe such super-intelligent machines would be "evil creatures" that want to destroy all of humanity because of their intellectual superiority. More likely, they simply will not notice humanity at all.
"Recently the media have constantly distorted my words. The main risk in the development of AI is not the malice of machines, but their competence. A superintelligent AI will be excellent at accomplishing tasks, but if its goals and ours do not coincide, humanity will have very serious problems," the famous scientist explains. As an example, Hawking cited a hypothetical situation in which a super-powerful AI is responsible for the operation or construction of a new hydroelectric dam. For such a machine, the priority will be how much energy the system entrusted to it generates, and the fate of people will not matter. "There are few among us who trample anthills and step on ants out of anger, but let's imagine a situation: you control a powerful hydroelectric station that generates electricity. If you need to raise the water level and, as a result of your actions, one anthill is flooded, the problems of the drowning insects are unlikely to bother you. Let's not put people in the place of the ants," said the scientist.
The second potential problem of the further development of artificial intelligence, according to Hawking, may be a "tyranny of the owners of machines": a rapid growth of the income gap between the rich, who manage to monopolize the production of intelligent machines, and the rest of the world's population. Hawking proposes to address these possible problems by slowing down the development of AI and switching from a "universal" to a highly specialized artificial intelligence that can solve only a very limited range of tasks.
In addition to Hawking and Musk, the letter was signed by Frank Wilczek, Nobel Prize winner and MIT physics professor; Luke Muehlhauser, executive director of the Machine Intelligence Research Institute (MIRI); many experts from major IT companies such as Google, Microsoft and IBM; and the entrepreneurs who founded Vicarious and DeepMind, companies specializing in the development of AI systems. The authors of the letter note that they are not aiming to scare the public, but intend to highlight both the positive and the negative sides of creating artificial intelligence. "At present, everyone agrees that research in the field of AI is progressing steadily, and the impact of AI on modern human society will only increase," the letter says. "The possibilities opening up to man are enormous; everything that modern civilization has to offer is a product of human intelligence. We cannot predict what we might achieve if human intelligence is multiplied by AI, but the problems of eradicating poverty and disease would no longer be infinitely difficult."
Numerous developments in the field of artificial intelligence, including image- and speech-recognition systems, unmanned vehicles and much more, are already part of modern life. According to Silicon Valley observers, more than 150 startups are working in this area today. At the same time, developments in this field attract more and more investment, and more and more companies, such as Google, are building their projects on AI. The authors of the letter therefore believe the time has come to pay increased attention to all the possible consequences of this boom for the economic, social and legal aspects of human life.
Nick Bostrom is a professor at the University of Oxford, known for his work on the anthropic principle. He believes that AI development is approaching a point beyond which it will be incompatible with humans. Bostrom emphasizes that, in contrast to genetic engineering and climate change, for whose oversight governments allocate sufficient funds, "nothing is being done to control the evolution of AI." According to the professor, artificial intelligence currently exists in a "legal vacuum that needs to be filled." Even technologies such as unmanned vehicles, which seem harmless and useful, raise a number of questions. For example, should such a car brake hard in an emergency to save its passengers, and who will be held responsible for an accident caused by an unmanned vehicle?
Discussing the potential risks, Bostrom noted that "a computer is not able to determine benefit and harm to humans" and "does not have even the slightest idea of human morality." In addition, cycles of self-improvement in computers may occur at a speed a human simply cannot follow, and almost nothing can be done about that either, says the scientist. "At the stage of development when computers can think for themselves, no one can accurately predict whether this will lead to chaos or significantly improve our world," Bostrom said, citing as an example the simplest solution a computer might arrive at: switching off heating in cold-climate countries to toughen people up and increase their endurance, an idea that "might well occur to an artificial intelligence."
Bostrom also raises the problem of implanting chips in the human brain in order to increase our bio-intelligence. "In many ways such a procedure can be useful if all the processes are controlled, but what happens if the implanted chip can reprogram itself? What consequences can this lead to: the emergence of a superman, or of a computer that only looks like a man?" the professor asks. The ways in which computers solve human problems are very different from ours. For example, in chess the human brain considers only a narrow set of moves and chooses the best option among them. The computer, by contrast, considers all possible moves and chooses the best of them all. In doing so, the computer does not try to upset or surprise its opponent; unlike a human chess player, it can make a cunning and subtle move only by chance. Artificial intelligence can be seen as the best way to eliminate error from a system by removing the "human factor," but, unlike a human, a robot is not ready to perform feats that would save people's lives.
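The exhaustive search described above can be illustrated with a short sketch. Chess itself is far too large for a few lines, so this hypothetical example uses a toy Nim-style game (a pile of stones; each turn a player takes 1 to 3; whoever takes the last stone wins). The function name and game are illustrative only, not taken from any source mentioned in the article:

```python
def best_move(stones, maximizing=True):
    """Exhaustive minimax search: examine every legal move.

    Returns (score, move): score is +1 if the first player can force
    a win from this position, -1 otherwise; move is how many stones
    the side to move should take (None at a terminal position).
    """
    if stones == 0:
        # The previous player took the last stone and won,
        # so the side now to move has lost.
        return (-1 if maximizing else 1), None
    scores = []
    for take in range(1, min(3, stones) + 1):
        score, _ = best_move(stones - take, not maximizing)
        scores.append((score, take))
    # Unlike a human, the program considers all possible moves
    # and picks the best of them all.
    return max(scores) if maximizing else min(scores)
```

For example, from a pile of 5 stones the search finds a forced win for the first player (take 1, leaving the opponent with a losing pile of 4), while a pile of 4 is lost no matter what the first player does. The move it picks is simply the optimal one; there is no attempt to "surprise" the opponent, exactly as the text describes.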
In addition, the growth in the number of smart machines is a stage of a new industrial revolution. In turn, this means that in the near future humanity will face inevitable social changes. Over time, work will become the lot of highly qualified specialists, since robots and other machines will be able to take on almost all simple tasks. Scientists believe that artificial intelligence needs constant supervision, so that our planet does not turn into the cartoon planet Zhelezyaka, populated by robots.
When it comes to the ever-greater automation of production processes, the future has already arrived. The World Economic Forum (WEF) has presented a report according to which automation will lead to more than 5 million people in various fields losing their jobs by 2020. Such is the impact of robots and robotic systems on our lives. To compile the report, WEF staff used data on 13.5 million workers from around the world. According to these data, by 2020 the total need for more than 7 million jobs will disappear, while the expected employment growth in other industries will amount to just over 2 million jobs.
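The report's headline figure follows from simple arithmetic on the two numbers quoted above. A minimal check, using only the round figures given in the text (the report's exact decimals are not cited here):

```python
# Net job loss implied by the WEF figures quoted above:
# jobs expected to disappear minus expected growth in other industries.
jobs_lost = 7_000_000     # "more than 7 million jobs will disappear"
jobs_created = 2_000_000  # "just over 2 million jobs" of growth
net_loss = jobs_lost - jobs_created
print(net_loss)  # 5000000, consistent with "more than 5 million people"
```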
Information sources:
http://www.ridus.ru/news/209869
http://www.vedomosti.ru/technology/articles/2015/01/13/ugrozy-iskusstvennogo-razuma
https://nplus1.ru/news/2016/01/19/they-took-our-jobs
http://ru.sputnik.kg/world/20151013/1019227239.html