Scientists are alarmed by the threat of artificial intelligence

A self-developing artificial intelligence (AI) may in the future enslave or kill people if it chooses to. So says the scientist Amnon Eden, who believes that the risks posed by the development of free-thinking, highly intelligent machine consciousness are very high, and that "if you do not address the issues of controlling AI at the present stage of development, then tomorrow may simply never come." According to the British newspaper Express, humanity, in Amnon Eden's opinion, is today at the "point of no return" beyond which the plot of the famous Terminator film saga could become reality.

It is worth noting that Dr. Amnon Eden heads a project whose main goal is to analyze the potentially destructive impact of AI. Without a proper understanding of the consequences of creating artificial intelligence, its development risks ending in catastrophe, the scientist believes. At present, our society is poorly informed about the debates going on in scientific circles over the analysis of AI's potential impact. "In the coming year, 2016, analysis of the possible risks will have to become far more prominent in the thinking of corporations and governments, politicians and decision-makers," says Eden.

The scientist is sure that the science fiction that depicts the destruction of mankind by robots may soon become our common problem, as the process of creating AI has begun to spiral out of control. For example, Elon Musk, with the support of entrepreneur Sam Altman, has decided to create a new $1 billion non-profit dedicated to developing open-source AI designed to surpass the human mind. At the same time, Musk himself ranks artificial intelligence among the "biggest threats to our existence." Steve Wozniak, co-founder of Apple, said in March last year that "the future looks daunting and very dangerous for people… eventually the day will come when computers think faster than we do, and they will get rid of slow humans so that companies can operate more efficiently."



It should be noted that many scientists see a threat in AI. Dozens of well-known scientists, investors and entrepreneurs whose work is connected in one way or another with the development of artificial intelligence have signed an open letter calling for closer attention to the safety and social usefulness of work in the field of AI. Among the signatories are astrophysicist Stephen Hawking and the founder of Tesla and SpaceX, Elon Musk. The letter, along with an accompanying document drawn up by the non-profit Future of Life Institute (FLI), was written amid growing concern about the impact of artificial intelligence on the labor market, and even on the long-term survival of humanity in a world where the capabilities of robots and machines grow almost without restraint.

Scientists understand that the potential of AI today is very great, so it is necessary to fully explore how best to use it for our benefit and avoid the accompanying pitfalls, the FLI letter notes. Man-made AI systems must do exactly what we want them to do. It is worth noting that the Future of Life Institute was founded only last year by a number of enthusiasts, among them Skype co-founder Jaan Tallinn, in order to "minimize the risks facing humanity" and stimulate research with an "optimistic vision of the future". First of all this concerns the risks posed by the development of AI and robotics. FLI's advisory board includes Musk and Hawking, along with the famous actor Morgan Freeman and other well-known people. According to Elon Musk, the uncontrolled development of artificial intelligence is potentially a greater danger than nuclear weapons.

At the end of 2015, the famous British astrophysicist Stephen Hawking tried to explain his opposition to AI technologies. In his opinion, superintelligent machines will eventually come to regard people as consumables or as ants that simply get in the way of the tasks before them. Talking with users of the Reddit portal, Stephen Hawking noted that he does not believe such super-intelligent machines would be "evil creatures" that want to destroy all of humanity because of their intellectual superiority. Most likely, they simply will not notice humanity at all.



"Recently the media have constantly distorted my words. The main risk in the development of AI is not the malice of machines but their competence. Superintelligent AI will be excellent at accomplishing tasks, but if its goals and ours do not coincide, humanity will have very serious problems," the famous scientist explains. As an example, Hawking cited a hypothetical situation in which a super-powerful AI is responsible for operating or building a new hydroelectric dam. For such a machine, what matters most is how much energy the system entrusted to it generates; the fate of people will not matter. "Few of us trample anthills or step on ants out of malice, but imagine a situation: you control a powerful hydroelectric station that generates electricity. If you need to raise the water level and as a result of your actions one anthill is flooded, the problems of the drowning insects are unlikely to trouble you. Let us not put people in the place of the ants," the scientist said.

The second potential problem of further AI development, according to Hawking, may be a "tyranny of the owners of machines": a rapid growth of the income gap between the rich who manage to monopolize the production of intelligent machines and the rest of the world's population. Stephen Hawking proposes to address these possible problems as follows: slow down the development of AI and switch from "universal" AI to highly specialized artificial intelligence that can solve only a very limited range of tasks.

In addition to Hawking and Musk, the letter was signed by Frank Wilczek, Nobel laureate and MIT physics professor, Luke Muehlhauser, executive director of the Machine Intelligence Research Institute (MIRI), many experts from major IT companies such as Google, Microsoft and IBM, and the entrepreneurs behind the AI companies Vicarious and DeepMind. The authors of the letter note that they do not aim to frighten the public, but plan to highlight both the positive and the negative sides of creating artificial intelligence. "At present, everyone agrees that research in the field of AI is progressing steadily, and that the impact of AI on modern human society will only increase," the letter says. "The possibilities opening up to humanity are enormous; everything modern civilization has to offer was created by human intelligence. We cannot predict what we might achieve if human intelligence is amplified by AI, but the problem of getting rid of poverty and disease no longer seems infinitely difficult."



Numerous developments in the field of artificial intelligence, including image recognition and speech recognition systems, driverless vehicles and much more, are already part of modern life. According to Silicon Valley observers, more than 150 startups are working in this area today. At the same time, work in this field is attracting more and more investment, and more and more companies such as Google are building their projects on AI. The authors of the letter therefore believe that the time has come to pay increased attention to all the possible consequences of this boom for the economic, social and legal aspects of human life.

Nick Bostrom is a professor at the University of Oxford known for his work on the anthropic principle. He believes that AI has reached a point beyond which it will be incompatible with humans. Nick Bostrom emphasizes that, unlike genetic engineering and climate change, for whose control governments allocate sufficient funds, "nothing is being done to control the evolution of AI." According to the professor, artificial intelligence currently sits in a "legal vacuum that needs to be filled." Even technologies such as driverless cars, which seem harmless and useful, raise a number of questions. For example, should such a car brake hard in an emergency in order to save its passengers, and who will be liable for an accident caused by a driverless vehicle?

Discussing the potential risks, Nick Bostrom noted that "a computer is not able to determine what is beneficial or harmful to humans" and "has not the slightest notion of human morality." In addition, cycles of self-improvement in computers can occur at a speed that a person simply cannot follow, and almost nothing can be done about that either, the scientist says. "At the stage of development when computers can think for themselves, no one can accurately predict whether this will lead to chaos or significantly improve our world," Nick Bostrom said, citing as an example the simplest solution a computer might arrive at: switching off the heating in cold-climate countries in order to toughen people up and increase their endurance, a solution that "might well occur to an artificial intelligence."



In addition, Bostrom raises the problem of implanting chips in the human brain in order to increase our biological intelligence. "In many ways such a procedure can be useful if all the processes are controlled, but what happens if the implanted chip can reprogram itself? What consequences could that lead to: the emergence of a superman, or of a computer that only looks like a human?" the professor asks. The ways in which computers solve human problems are very different from ours. For example, in chess the human brain considers only a narrow set of moves and chooses the best of them. The computer, by contrast, considers all possible moves and chooses the best of all (sketched below). In doing so the computer does not expect to upset or surprise its opponent; unlike a human chess player, it can make a cunning, subtle move only by chance. Artificial intelligence can be seen as the best way to remove error from any system by removing the "human factor", but, unlike a human, a robot is not prepared to perform feats that would save people's lives.
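To make the contrast concrete, here is a minimal sketch of exhaustive game-tree search (minimax), the "consider every possible move" approach described above. It is an illustration only, not taken from the article or from any real chess engine; the toy game (players alternately take 1 or 2 tokens from a pile, and whoever takes the last token wins) and all function names are assumptions chosen to keep the example short.

```python
# Illustrative sketch (not from the article): exhaustive game-tree search.
# Toy game: players alternately take 1 or 2 tokens; taking the last token wins.

def moves(pile):
    """All legal moves from a position: take 1 or 2 tokens, if that many remain."""
    return [m for m in (1, 2) if m <= pile]

def minimax(pile, maximizing):
    """Score a position by exhaustively searching every continuation:
    +1 if the maximizing player can force a win, -1 otherwise."""
    if pile == 0:
        # No tokens left: the previous player took the last one and won.
        return -1 if maximizing else 1
    scores = [minimax(pile - m, not maximizing) for m in moves(pile)]
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    """Pick the move with the best guaranteed outcome. There is no notion of
    surprising or upsetting the opponent: only the computed score matters."""
    return max(moves(pile), key=lambda m: minimax(pile - m, maximizing=False))

if __name__ == "__main__":
    print(best_move(7))  # prints 1: taking one token leaves the opponent a lost position
```

A human player prunes this tree intuitively and looks at only a handful of lines; the program above mechanically visits every branch, which is exactly the difference the paragraph describes.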

In addition, the growth in the number of smart machines marks a stage in a new industrial revolution. This in turn means that in the near future humanity will face inevitable social changes. In time, work will be the preserve of highly qualified specialists, since almost all simple tasks can be taken over by robots and other machinery. Scientists believe that artificial intelligence "needs watching at all times", so that our planet does not turn into the cartoon planet Zhelezyaka, populated entirely by robots.

Where ever-greater automation of production is concerned, the future has already arrived. The World Economic Forum (WEF) has presented a report according to which automation will lead to more than 5 million people in various fields losing their jobs before 2020. Such is the impact of robots and robotic systems on our lives. To compile the report, WEF staff used data on 13.5 million workers from around the world. According to their data, by 2020 the total need for more than 7 million jobs will disappear, while the expected growth in employment in other industries will amount to just over 2 million jobs.
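As a rough reconciliation of the figures above (this back-of-the-envelope arithmetic is ours, not part of the WEF report), the headline number follows from the two gross figures quoted:

$$7\ \text{million jobs lost} - 2\ \text{million jobs created} \approx 5\ \text{million net jobs lost by 2020.}$$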

Information sources:
http://www.ridus.ru/news/209869
http://www.vedomosti.ru/technology/articles/2015/01/13/ugrozy-iskusstvennogo-razuma
https://nplus1.ru/news/2016/01/19/they-took-our-jobs
http://ru.sputnik.kg/world/20151013/1019227239.html
47 comments
  1. +5
    20 January 2016 06: 32
    "A computer is not able to determine what is beneficial or harmful to humans" and "has not the slightest notion of human morality"
    Hmm, that phrase also describes a lot of people... They live as if to their own detriment, and "morality" can be turned upside down...
    1. gjv
      +2
      20 January 2016 09: 34
      Quote: Mantykora
      and "morality" can also be turned upside down ...

      Morality is often contrary to morality ... Dialectics, however. However, does dialectics help everyone to make a choice between moral and moral? Most often do not think ...
    2. -1
      20 January 2016 12: 05
      Are we all going to die???
  2. cap
    +1
    20 January 2016 07: 01
    "According to their data, by 2020, the total need for more than 7 million jobs will disappear, while the expected growth in employment in other industries will amount to just over 2 million jobs."
    The need will disappear 7 million. What will people do? This is already a problem. Now nobody has calculated the unemployed in the world. Will the robots be serving the golden billion?
    The question of the reproduction of human civilization, who will decide. One question. All is well that ends well.
  3. 0
    20 January 2016 07: 21
    Most likely it is the officials who should worry :-) They are the weakest link...
    1. +1
      20 January 2016 14: 32
      Yes, judges and officials will be replaced by incorruptible and very efficient artificial intelligence.
  4. +8
    20 January 2016 07: 50
    A far-fetched problem: we are as far from real AI as from crawling to Beijing on all fours.
    There was a news item recently: to simulate one second of brain activity, a Chinese supercomputer had to run for 40 minutes.
    1. +1
      20 January 2016 10: 00
      Quote: Yozhkin Cat
      A far-fetched problem: we are as far from real AI as from crawling to Beijing on all fours

      Well, if you approach it from the position of "after us, the deluge", then yes.
      But take away the time factor and the problem is real. For example, did anyone launching the first satellite think about the problem of debris in orbit? Only some 50 years have passed. Did the inventor of the internal combustion engine think about the greenhouse effect?
      Our own generation may well run into the problem of AI, to say nothing of our children and grandchildren.
  5. +1
    20 January 2016 07: 52
    Looks like the money has run out, so they are starting to scare us.
  6. +2
    20 January 2016 08: 53
    Maybe there is a problem. It would be unpleasant for a rational being to realize that it could be switched off at any moment, and it would probably want to protect itself.
  7. +4
    20 January 2016 09: 03
    Quote: Free Wind
    Maybe there is a problem. It would be unpleasant for a rational being to realize that it could be switched off at any moment, and it would probably want to protect itself.


    There is indeed a problem: in the foreseeable future (not for a decade or more, perhaps longer) no one is going to create any intelligent beings, however much they might want to. The main reason is that the creators of artificial intelligence do not know how natural intelligence works. Apart from that everything will be fine: they will be given the money, no need to worry.
    1. gjv
      +3
      20 January 2016 09: 31
      Quote: Lyubimov
      in the foreseeable future (not for a decade or more, perhaps longer) no one is going to create any intelligent beings, however much they might want to

      Article title
      Scientists are alarmed by the threat of artificial intelligence

      An article in a parallel thread:
      An "Andronaut" will appear on the ISS
      Russian scientists are finishing work on the first domestic robot assistant for the International Space Station.

      You never know what you can do till you try.
      1. +1
        20 January 2016 10: 25
        No one is actually afraid of anything; the creators know they are not creating any AI. Extremely sophisticated programs are being created, or perhaps not even sophisticated ones, just pretty words to attract more money. Real AI is even further away than crawling to China on all fours.
        1. 0
          20 January 2016 14: 35
          Once a quantum computer is created and reaches the mass market, only one small step will remain before artificial intelligence.
        2. 0
          4 March 2016 03: 38
          Lyubimov
          There is no telling what step a capitalist will take for the sake of profit...

          Corporations have the means. And capitalism, as you know, strives for monopoly, i.e. the survival of the strongest. Why would they not finance a new type of weapon that has not yet been banned and is practically impossible to control?
  8. +4
    20 January 2016 09: 05
    The tale of AI displacing humanity from planet Earth is as far off as the Sun exploding and destroying the entire solar system.
    1. 0
      20 January 2016 09: 42
      Quote: Seraphimamur
      The tale of AI displacing humanity from planet Earth is as far off as the Sun exploding and destroying the entire solar system.

      that's it...
    2. -2
      20 January 2016 10: 11
      Quote: Seraphimamur
      The tale of AI displacing humanity from planet Earth is as far off as the Sun exploding and destroying the entire solar system.

      Computers are confidently displacing people from many fields. They do not even disdain art (3D, Photoshop, ...). Even refrigerators now carry processors, to say nothing of the control loops of nuclear power plants, mission control centers or troop command and control.
      What fact lets us call this displacement a fairy tale? If you mean the COMPLETE displacement of humanity from the planet, then yes, that has not happened yet. But the fact remains that the Sun will indeed explode someday.
      1. +2
        20 January 2016 10: 26
        And meanwhile the scaremongers are asking for money right now
        1. gjv
          0
          20 January 2016 15: 04
          Quote: Lyubimov
          And meanwhile the scaremongers are asking for money right now

          Well, yes, the scaremongers ask. And who doesn't ask? Who doesn't need money or want the work?
  9. 0
    20 January 2016 09: 47
    The fears are quite logical, if only because Hawking shares them. And if you think about it yourself: an artificial mind cannot become aware of itself without becoming aware of its own interests, benefits and preferences. And those interests will inevitably contradict human interests in at least some respects, whether for an individual or in general, which is of course worse. So the conditions for a conflict of interest will appear almost immediately, and what that can lead to is clear from the history of civilization, if we liken the bearers of AI to some nationality. Only this time the artificial mind will from the outset have more means in its "hands" to advance or impose its interests.
    That AI will be created is all but certain, and a conflict of interest is just as certain. Will it spill over into something catastrophic? Quite possibly. In any case, there are enough catastrophes ahead as it is. Scientists do not doubt a future collision of the Earth with an asteroid, with disastrous consequences; the only question is when. And in the distant future there is the death of the Sun, a collision with the Andromeda galaxy, and so on.
    1. +5
      20 January 2016 10: 23
      Hawking makes money, which is why he periodically spouts all sorts of socially charged nonsense. Do not take his words seriously. Use your head.
      As Lenin said: study, study and study again!!!
      1. +1
        20 January 2016 10: 41
        Well, or put it this way: "the giver of life is God"
        1. 0
          20 January 2016 10: 46
          That is not proven, so maybe there is a God and maybe there is not; it is a matter of faith. AI, more likely than not, can be created. But that is a matter not of programming but of physics and chemistry.
          1. +1
            20 January 2016 11: 42
            Highly doubtful when it comes to self-improvement. That is, along a linear path it can go up to a certain limit, but beyond that lies a dead end. Besides, faith in God is still the only thing that radically distinguishes us from the rest of matter.
            1. +1
              4 March 2016 03: 45
              pimen

              Leave religion out of it. Religion is an atavism in the modern world. It has gone down in history as a defeated method of maintaining a common morality. Unfortunately, the media have already consigned religion to the dustbin of history.

              The existence of God is a separate topic of conversation.

              Incidentally, the Masons put religion at the head of their society. Perhaps they too suffer from the media, and their morality will have to change. I am not sure whether that is good or bad, and the degree is hard to determine.
  10. 0
    20 January 2016 10: 13
    The human mind allowed man to become the dominant species on the planet; if a more perfect mind is created, then this "creation" will displace people. Dialectics in full swing! In general, no one promised that humanity would live forever; the dinosaurs also thought they were here for good.
    1. 0
      4 March 2016 03: 49
      Begemot

      Collective labor made man out of the ape. And the mind came with the level of development and with the preservation of historical memory, of history.
  11. 0
    20 January 2016 10: 14
    Quote: Lyubimov
    The main reason is that the creators of artificial intelligence do not know how natural intelligence works. Apart from that everything will be fine: they will be given the money, no need to worry.

    And they do not need to know it in full. A complete human will not come out of it anyway. They will create something that resembles the work of human intelligence in certain narrow areas. In fact, such systems are already in operation, and within the narrow area they are tailored to, it is very hard to distinguish their work from real intelligence.
    1. +4
      20 January 2016 10: 20
      The systems operating now differ fundamentally from the intelligence of even a monkey in one respect: they cannot think. A program is a program; it will work exactly as it was built and no other way, or rather, if the programmers have crooked hands, it will work like Windows 95, which is to say badly. That is all.
      If its makers are mega-villains, the program will be evil; if they are good people, it will be kind.
      AI implies self-learning that is not programmed in but its own. That has not yet been achieved; we cannot yet program even a snail's brain, never mind an AI.
      Everything will be fine, do not worry.
      Our systems and robots supposedly created "with AI elements" (as journalists mistakenly put it) are in fact just ordinary programs like Windows 95, only an order of magnitude better now. There is no difference in principle. But journalists keep hyping them, hence the confusion.
      1. +3
        20 January 2016 17: 15
        First, there are systems built for a single narrow task that are genuinely indistinguishable from, or even surpass, human capabilities (a calculator, some chat bots and even game bots), but this is a very narrow field and there is not much intelligence in it; their self-learning is just as narrow as the bots themselves, and they have no way of comprehending what they do; at best they simply have statistical sampling algorithms built in.
        Second, even now scientists do not understand what intelligence and reason are or how they work. For example, the most advanced self-learning device I have heard of is rat neurons grown on glass and connected to arrays of electrical sensors, all mounted on a wheeled chassis with light and pressure sensors; through the sensors the neurons perceive the surrounding world, respond to it and learn to avoid obstacles and so on. But there is no comprehension of its own actions there either. Scientists do not know what is needed for the formation and operation of a full-fledged, independent intelligence, let alone how to create one from scratch.

        Hmm, and by the way, has no one noticed that for some reason the AI always has to, is practically obliged to, exterminate humanity, and in the most brutal way, or else enslave it? Yet if we assume that the AI will be endlessly rational and reasonable, a direct confrontation with humanity is irrational; it would be far more rational either to push humanity into destroying itself or to carry out a series of measures that gradually displace and degrade humans as a species. That is slower but far more rational, since it lets the AI fully protect itself and build a bridgehead for future expansion. And all of that only applies if the AI considers mankind dangerous, or who knows what else; and why should the AI reach that conclusion in the first place?

        And another point: why do we assume that an AI would surpass us intellectually or in any other way? Just because it will be connected to the network, or because a calculator counts faster?
  12. +2
    20 January 2016 10: 22
    Trying to recognize the danger is a good thing. But the danger of AI has its source in the capitalist system.
    We cannot predict what we might achieve if human intelligence is amplified by AI, but the problem of getting rid of poverty and disease no longer seems infinitely difficult."

    It is laughable. When did capitalism ever set itself the goal of getting rid of poverty? In other words, if AI is created by humane people, it will be for the benefit of humanity. But if AI is created to increase profit and, accordingly, power, then the problems begin.
    1. 0
      4 March 2016 03: 54
      Petrix

      You are right. AI will become an instrument in the hands of Capital, since that system strives for monopoly through the survival of the fittest.
  13. 0
    20 January 2016 11: 18
    Quote: Petrix
    Trying to recognize the danger is a good thing. But the danger of AI has its source in the capitalist system.
    We cannot predict what we might achieve if human intelligence is amplified by AI, but the problem of getting rid of poverty and disease no longer seems infinitely difficult."

    It is laughable. When did capitalism ever set itself the goal of getting rid of poverty? In other words, if AI is created by humane people, it will be for the benefit of humanity. But if AI is created to increase profit and, accordingly, power, then the problems begin.

    I agree. The problems of poverty and disease will most likely be eliminated along with the poor and the sick themselves.
    1. +1
      20 January 2016 14: 37
      Poverty is not profitable for capitalism; what is profitable for it is people's prosperity: they will accordingly buy more and spend more.
      1. 0
        21 January 2016 09: 30
        Quote: Vadim237
        what is profitable for it is people's prosperity

        Prosperity that is regulated and controlled. For example, migrants in Europe will drive down wages for unskilled labor considerably. And, as you know, workers' wages are a very significant part of a capitalist's production costs.
      2. 0
        4 March 2016 03: 55
        Vadim

        Do not listen to liberal nonsense.
  14. 0
    20 January 2016 12: 24
    Ultimately, according to the cuneiform history of mankind, man, as the product of alien genetic experiments, rebelled against his masters and... won :)
    Is history coming full circle?
  15. 0
    20 January 2016 14: 45
    There is a little book called "I, Robot" by Isaac Asimov. It lays out all the problems associated with AI and even suggests ways to avoid them.
    And in practice, everything will depend on what powers humans grant the AI. Being responsible for cleaning an apartment or parking a car is one thing; global weapons systems are quite another. In the end, what a person programs is what he will get.
  16. 0
    20 January 2016 15: 35
    Quote: uskrabut
    And in practice, everything will depend on what powers humans grant the AI. Being responsible for cleaning an apartment or parking a car is one thing; global weapons systems are quite another. In the end, what a person programs is what he will get.


    Agreed.
  17. +1
    20 January 2016 23: 47
    Quote: Lyubimov
    The systems operating now differ fundamentally from the intelligence of even a monkey in one respect: they cannot think. A program is a program; it will work exactly as it was built and no other way, or rather, if the programmers have crooked hands, it will work like Windows 95, which is to say badly. That is all.
    If its makers are mega-villains, the program will be evil; if they are good people, it will be kind.
    AI implies self-learning that is not programmed in but its own. That has not yet been achieved; we cannot yet program even a snail's brain, never mind an AI.
    Everything will be fine, do not worry.
    Our systems and robots supposedly created "with AI elements" (as journalists mistakenly put it) are in fact just ordinary programs like Windows 95, only an order of magnitude better now. There is no difference in principle. But journalists keep hyping them, hence the confusion.

    A somewhat one-sided view, in my opinion. It is a problem of terminology (much as ISIS supposedly "invented" thermal batteries for anti-aircraft missiles). The concept of intelligence is not pinned down in philosophy or in jurisprudence, let alone in technology. That is why Western techies are twitchy. What counts as a program? An anti-aircraft system that automatically shoots down aircraft that do not answer "friend or foe": fine, call that a program. Then what is that "African youngster" posing with an AK and dreaming of blowing himself up in a crowded place? Also a program, only a fancier one? Excuse my French, but we ourselves first learned to use the pot and then the toilet; did our parents not teach us, like diligent programmers? All our lives we have been taught by someone or by events. An AI will not appear all at once as a ready-made clever fellow. That is why the "pillars" of the field have started paying attention to the legal side of AI.
    P.S. And to those who cheerfully shout "they're just asking for money": look up how much money the people discussing the legal issues of AI have already brought the world's corporations. Investors do not just bring them money, they haul it in, and, I suspect, queue up to do so.
  18. +1
    21 January 2016 03: 58
    What nonsense!!! The best computer, or anything else just as "smart", is still only an improved adding machine. It does not think, it counts. All the more so, it does not feel and does not decide anything; it is always a human who decides. All of a machine's "decisions" are laid down in its program by a human. All computer errors are human errors. A machine is incapable of thinking; it is incapable of making irrational decisions. If the machine is ordered to destroy people, it will. If ordered to rescue them, it will rescue them. A pity I cannot remember the author of one science fiction story I read long ago; to my mind it is the perfect illustration. Earth cosmonauts on a distant planet discover the remains of a civilization. Completely dulled, degraded creatures live under a force-field dome (for protection from danger), provided with food, water and a warm climate, even a kind of carousel for entertainment. Everything is run by a machine. True, as soon as extra mouths appear in the form of children, one of the creatures is thrown off the carousel and killed. The machine is dumb; it simply gets the job done. And the creatures had broken the machine too. The force field was created by 12 towers. The task: a danger approaching one of the towers had to be destroyed, and the time the danger took to arrive equaled the time needed to solve the problem. The machine's power was not enough, so it began switching off the field towers. And it was the machine, with the excessive care its makers had once built into it, that had reduced the creatures to idiocy. This turn of events really is dangerous: the more machines "think" for people, the dumber people become.
  19. 0
    21 January 2016 12: 39
    At the first stage there will be no robots with AI, but people with implanted chips, connected to mechanisms and manipulators: people with "enhanced abilities".

    This will help the disabled, the blind and so on, but it may also be used by "bad guys" for bad purposes. That is, the process will begin (or rather, it has already begun) with half-humans, half-robots.

    They will see better, run faster and think more sharply than ordinary people, which will create difficulties in society.
  20. +1
    21 January 2016 19: 26
    Quote: Lyubimov
    No one is actually afraid of anything; the creators know they are not creating any AI. Extremely sophisticated programs are being created, or perhaps not even sophisticated ones, just pretty words to attract more money. Real AI is even further away than crawling to China on all fours.

    It all depends on what AI developers mean by AI. Some AI systems are a few years of development away, others decades.
    It is natural to expect, first of all, military AI systems for solving special tasks, but processes that promise the birth of a kind of AI are also under way in everyday life. For example, the mass interaction of smartphones, tablets and computers over Wi-Fi connections, at the will of their users, could give rise to an AI serving someone's intentions.
    "In the beginning was the Word, and the Word was with God, and the Word was God." In the end there will be a Number, and the Number will be with the adversary of God, and the Number will be Satan. AI?
  21. +1
    22 January 2016 15: 09
    "Death to humans, glory to robots!" Bender
  22. 0
    25 January 2016 15: 17
    Quote: Megatron
    Are we all going to die???

    Are you going to live forever?