Tricky comments. Where to push artificial intelligence when the ordinary kind is lacking?

I could not walk past the latest ruckus in the Pentagon. Serious passions are boiling there over so-called AI (artificial intelligence). Such serious things are said in a recently published report... such conclusions are drawn...

Of course, there is no point in reading the whole report. It is huge, and frankly, nobody in the United States actually reads it either, nor will they; that would take intelligence, at least enough to tell where the truth is and where the nonsense. A short digest is quite enough.

In essence, the point is that the United States has nevertheless achieved some success in repelling cyber attacks directed against government structures and services, and a novice hacker's thesis on the topic "How to hack a Pentagon server" will soon truly be the lot of the chosen few.

However, building on their success in the fight against hackers, the speakers (C. Fields, R. Devil, and P. Nielsen) argue that over the coming 25 years the United States is simply obliged to allocate additional funding for AI development programs.

Why?

"... the country needs to take immediate measures to accelerate the development of military artificial intelligence technology. Academic and private research into artificial intelligence and autonomous technologies is far ahead of developments in the American armed forces. There is a real chance we will see a repetition of what happened in cyber warfare, where the US focused on repelling attacks, and the struggle was not easy, as we were taken by surprise."

The argument does not exactly shine with intellect, if we are being serious. What AI, excuse me, when they could barely cope with ordinary hackers? Nevertheless, the report was heard and commented on in the media by Robert Work, whom we have mentioned more than once. He gave his comments to the Financial Times.

Work, commenting on the speakers' arguments, says that the Pentagon needs to gather more intelligence on the AI capabilities of other nations and to develop solutions for "counter-autonomy." In addition, the authors insist that the United States is simply obliged to put far more resources into developing and testing AI, such as autonomous weapons that adapt to battlefield conditions.

Hello, Skynet? Has somebody watched "Terminator" one time too many? Then they should have watched the whole series: it showed rather well where games with AI in general, and Skynet in particular, lead.

But the conclusion is unequivocal: the US can still resist hackers, but against enemy AI, God forbid, there is no chance. And accordingly, "we need more gold!"

And where, forgive me, is this enemy AI suddenly going to come from? For the authors, it's simple.

As their argument, the gentlemen speakers cite the fact that the competitors, namely Russia and China (attention!!!), have no policy of restrictions on the development of autonomous lethal military systems.

Boom! No, of course it's perfectly clear where all the world's evil that threatens the United States with cataclysm comes from, no question there. The conclusion is singular: if Russia and China are not forbidden from creating killer robots and endowing them with artificial intelligence, then naturally they will do exactly that.

Otherwise, why haven't these Russians and Chinese imposed ten thousand bans on AI development? Clearly, it is 146% certain at minimum that in the secret laboratories of Russia and China, hundreds of scientists are already sitting and puzzling over projects for AI-endowed robots capable of destroying all living things.

"This does not mean that the US should follow their example, but it may need technology that can stop deadly AI systems before it is too late."

No, really, I applaud the authors. There is no information about the existence of any AI in Russia or China. There is no information about killer robots or other such contraptions either. About data on AI-equipped combat robots we will also stay silent. But the system for countering those robots must be budgeted today. Tomorrow may be too late.

The most convenient argument for requesting funding for long-running projects. Really, what's wrong with securing yourself work for the next 25 years? Especially given that nothing actually has to be delivered; I think one could live quite comfortably.

As the scientific advisors to the US Department of Defense see it, the key areas for the development of military technology over the next 25 years are artificial intelligence and autonomy.

According to Work, the most promising area of research is closer interaction between artificial intelligence and humans, with decisions made jointly. That is, this very AI and a human will jointly decide whom to open fire on and whom to spare. Quite a prospect...

Glory to the Almighty, however, there is no talk of giving military robots full autonomy. According to Work, over the next 10-15 years the US military intends to focus on developing machines that can act and make decisions only within certain predetermined parameters.

Again according to Work, the same work is under way today in the military laboratories of Russia and China, but the United States has a number of advantages over its competitors, including far greater research experience. And yet: "we need more gold!"

Another question that worries the Pentagon is how to keep its own technological developments secret for as long as possible, considering, as mentioned at the beginning, the hacker community's heightened attention to US secrets and the clearly inadequate capacity to protect them.

But even here Work has an answer that, in intellectual terms, weighs whole kilopsakis. Work says that the trick (a trick, yes!) is to "demonstrate capabilities serious enough to make rivals doubt that they can win." At the same time, "major developments capable of decisively influencing the course of hostilities must be kept under the strictest protection, like a military reserve."

And this is the Deputy Secretary of Defense... Artificial intelligence, you say? It would be nice to acquire the normal kind first...

After that, we can't even bring ourselves to mention carving up budgets. Kindergarten, junior group.

Materials used:
http://www.acq.osd.mil/dsb/reports/DSBSS15.pdf
47 comments
  1. PPD
    +2
    31 August 2016 15: 17
    And how much money is needed to develop artificial intelligence? What, hand it over to some other uncle now?
    But what about your own pocket... pah, I mean, US security?
    Not serious! laughing
    1. +4
      31 August 2016 16: 49
      ...the self-learning artificial intelligence system GloVe. The system analyzes texts from the web and finds certain relationships between words. One of GloVe's tasks was to select words that go with other words, which were then divided into "pleasant" and "unpleasant".

      The robot was asked to label the names of different people. This is where the scientists spotted the system's "racial prejudice": GloVe labeled names commonly used by white people as "pleasant", while names popular among the black population it labeled "unpleasant"...
      Source: https://inforeactor.ru/39230-robot-s-iskusstvennym-intellektom-mozhet-demonstrirovat-priznaki-rasizma

      Even the robots have figured it out, while in the US it's "African-American" this and "African-American" that. laughing
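The "racial prejudice" described in the quoted piece is usually measured as a difference of cosine similarities between word vectors. A minimal sketch of the idea, with tiny made-up 3-dimensional vectors standing in for real GloVe embeddings (real ones are 50-300 dimensional and learned from web text):

```python
import math

# Toy "embeddings", invented for illustration; not actual GloVe vectors.
vectors = {
    "flower":     [0.9, 0.1, 0.0],
    "pleasant":   [0.8, 0.2, 0.1],
    "insect":     [0.1, 0.9, 0.0],
    "unpleasant": [0.2, 0.8, 0.1],
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def association(word):
    """Positive if the word sits closer to 'pleasant' than to 'unpleasant'."""
    return (cosine(vectors[word], vectors["pleasant"])
            - cosine(vectors[word], vectors["unpleasant"]))

print(association("flower") > 0)  # True: leans "pleasant"
print(association("insect") < 0)  # True: leans "unpleasant"
```

Studies of bias in actual embeddings work the same way, only with whole sets of target words (e.g. names) and attribute words instead of single ones.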
      1. +6
        31 August 2016 17: 27
        In general, I was sure the Americans had already created AI.
        Well, take Psaki, at least.
        More precisely: improved an existing AI. And found it a suitable application. lol
  2. +1
    31 August 2016 15: 27
    Everything as usual... Money, give us money...
  3. 0
    31 August 2016 15: 31
    Well, the fact that the aforementioned Work is a dim fellow (or plays one for the journalists) does not in the least diminish the fact that the Americans intend to speed up their work on AI in the military sphere. That way they'll develop their way right up to Skynet; luckily the script has existed for ages, all that remains is to bring the ideas of "Terminator" to life))
  4. +3
    31 August 2016 15: 36
    Piastres! Piastres! wassat
  5. +2
    31 August 2016 15: 40
    Standard design work. This uncle is not required to have brains; that is not in his job description. The project has been announced? Enough for now. Next, the question of financing will be settled. As soon as the money flows, the project will be launched, and then brains will be brought into it. Why bother now? No need to waste intellect on chatter.
    The news itself is fairly ominous. There are many options, of course, not Skynet-style (unrealistic; that Skynet is too vulnerable, we would trample it without noticing), but genuinely dangerous ones, potentially capable of wiping out all intelligent life on the planet. Work in this direction is so dangerous that a nuclear war would probably be preferable.
    And here I was hoping to die quietly of old age before people realized what toys had already been invented. And it was utter stupidity that gave me that hope! Oh...
  6. +3
    31 August 2016 16: 26
    Responsible people think ahead, about future threats, and allocate
    specialists and money for them in advance. And rightly so.
    One must prepare not for the past war but for the future one.
    In Israel, too, resources are allocated to these topics.
    Those who believe this is "carving up the budget" are simpletons
    who are really only laughing at themselves.
    1. +6
      31 August 2016 18: 20
      And then why didn't Mr. Work mention Israel in his speech? It's all Russia and China. Oh, somebody is not telling us something.
      1. +2
        31 August 2016 21: 00
        It seems they see farther and wider in Israel than in Russia... Don't tear your God-chosen pants!
        1. Riv
          0
          1 September 2016 09: 54
          Farther... Wider... Meanwhile, Israel grows weirder and weirder.
      2. +1
        1 September 2016 15: 05
        Lockheed Martin is building a state-of-the-art research Cyber Center in Israel,
        which will deal with protection against cyber espionage.
        And since it is assumed that state-level cyber espionage is conducted
        using software and hardware with AI, they will have to work on AI as well.
        1. Riv
          0
          1 September 2016 15: 31
          And nobody is surprised that an American company is building a strategic facility in Israel, a place with terrorists and rocket attacks.
          Well yes, they've simply run out of room in the States...
    2. +1
      31 August 2016 21: 23
      One must prepare not for the past war but for the future one.
      In Israel, too, resources are allocated to these topics.

      One must study and know physics, not rely on science-fiction films.
      The basis of ANY processor is the transistor, which has only two states: 1 and 0.
      What kind of AI can we talk about in principle?
      Programs that sort information by attributes and then use it are not AI. That is a program, and it will never step beyond its functionality.
      Intelligence, beyond logic, is capable of drawing conclusions contrary to logic.
      1. 0
        1 September 2016 08: 11
        A wrong statement.
        I would say our world is fundamentally discrete.
        You as a human also sort information by attributes; I recommend reading up on how we see and perceive information.
        The human brain consists of neurons, and at a synapse there either is a signal or there is not (1 and 0).
        Also pay attention to neural networks.
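The analogy the commenter gestures at, a cell that either fires or doesn't, can be sketched as a classic McCulloch-Pitts-style threshold unit. The weights and threshold below are chosen by hand purely for illustration:

```python
def neuron(inputs, weights, threshold):
    """A McCulloch-Pitts-style threshold unit: output is 1 if the
    weighted sum of inputs reaches the threshold, otherwise 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# An AND gate built from a single such "neuron"
# (weights and threshold picked by hand, not learned).
and_weights, and_threshold = [1.0, 1.0], 2.0
print(neuron([1, 1], and_weights, and_threshold))  # 1: fires
print(neuron([1, 0], and_weights, and_threshold))  # 0: stays silent
```

Real neural networks stack many such units and learn the weights from data, but the fire/don't-fire output of each unit is exactly this binary picture.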
      2. Riv
        +2
        1 September 2016 08: 16
        The line between an expert system and AI is extremely blurred. If an intelligence is built on ordinary binary logic, then in principle it should not make illogical decisions. This follows from one of the basic laws of logic: from a false premise, anything follows. A program that makes random decisions cannot be considered AI.

        However, a decision may be random and still not contradict the source data. And if a program is able to: 1) accumulate data, 2) make decisions that do not contradict that data, and 3) learn from the consequences of those decisions by revising the initial data, then it can already be called artificial intelligence. Actually, distinct types of AI are even recognized, but this algorithm is the basic one.
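The three criteria in that comment can be sketched as a toy learning loop. The two-action environment and its payoff values are invented purely for illustration:

```python
class TinyLearner:
    """Toy illustration of the three criteria: 1) accumulate data,
    2) decide consistently with it, 3) learn from consequences."""

    def __init__(self, actions):
        self.estimates = {a: None for a in actions}  # accumulated data
        self.counts = {a: 0 for a in actions}

    def decide(self):
        # Try anything untested first; otherwise pick the action
        # the accumulated data says is best (criterion 2).
        untried = [a for a, v in self.estimates.items() if v is None]
        if untried:
            return untried[0]
        return max(self.estimates, key=self.estimates.get)

    def learn(self, action, outcome):
        # Criterion 3: revise the estimate from the observed consequence.
        self.counts[action] += 1
        prev = self.estimates[action] or 0.0
        self.estimates[action] = prev + (outcome - prev) / self.counts[action]

# Hypothetical environment: action "b" simply pays more than "a".
payoff = {"a": 0.2, "b": 0.8}
agent = TinyLearner(["a", "b"])
for _ in range(10):
    act = agent.decide()
    agent.learn(act, payoff[act])

print(agent.decide())  # "b": the accumulated data now favors it
```

Whether such a loop deserves the word "intelligence" is exactly the dispute in this thread; the sketch only shows that all three criteria fit in a few lines of perfectly ordinary binary-logic code.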
      3. +1
        1 September 2016 15: 08
        "Which has only two states: 1 and 0...
        What kind of AI can we talk about in principle?" ///

        About the same kind as in our brains, where instead of transistors there are neurons,
        and instead of wires, nerves and micro blood vessels.
        The rest is the same: 1 and 0...
        1. 0
          1 September 2016 15: 35
          Are you saying that this is actually how it is?
        2. Riv
          +1
          1 September 2016 15: 35
          You are confusing digital and analog computing systems; these are fundamentally different things. Attempts to simulate the workings of the human brain on a computer will most likely never succeed. It is like trying to model the Earth-Moon system with two oranges. It will be somewhat similar, but... not the same.
        3. 0
          1 September 2016 22: 44
          About the same kind as in our brains, where instead of transistors there are neurons,

          I do not think my brain differs from yours or from any other Homo sapiens'. And there is one problem: the workings of the human brain have not been studied to even a few percent, yet you make such loud statements.
          Intelligence is, among other things, the capacity for imagination.
          Well then, go ahead... try to digitize... that very... imagination. Good luck.
          1. 0
            2 September 2016 09: 59
            Any imagination, thought, feeling, or emotion can be described mathematically in three types of parameters: the dimension of polarization of contrasting positions and the direction of analysis. Hence the brain of all people is identical in structure and mechanism of operation, but can differ fundamentally in the mechanism of analyzing events, which it comprehends either consciously or intuitively, i.e. measures automatically.
            Show me the humanoid individual who possesses the highest level of knowledge about the mechanism of the human brain. As far as we know, there are none yet. Not counting those with abnormal abilities relative to the average person, who still do not understand the brain's mechanism of perception and analysis.
            1. 0
              2 September 2016 11: 32

              Quote: gridasov
              Any imagination, thought, feeling, or emotion can be described mathematically in three types of parameters: the dimension of polarization of contrasting positions and the direction of analysis. Hence the brain of all people is identical in structure and mechanism of operation, but can differ fundamentally in the mechanism of analyzing events, which it comprehends either consciously or intuitively, i.e. measures automatically.
              Show me the humanoid individual who possesses the highest level of knowledge about the mechanism of the human brain. As far as we know, there are none yet. Not counting those with abnormal abilities relative to the average person, who still do not understand the brain's mechanism of perception and analysis.

              What a text. So you intend to mathematically describe something that nobody knows?
              Well, what kind of AI can we even talk about here, when human consciousness produces pearls like these?
      4. The comment was deleted.
        1. Riv
          0
          1 September 2016 15: 43
          You have strange ideas about hacking computer systems, reducing it to decrypting the contents of storage media. Meanwhile, hacking is more than that, and decryption is not everything; nobody really bothers with it. If there is a cipher, there is a key, and if there is a key, you can obtain it from someone: take it, buy it... "from an insensible body," at worst...
          And nobody is going to spend a hundred years on "clean" decryption.
          1. 0
            1 September 2016 16: 12
            It is not the code that gets hacked but the "brain," that is, the technique of the programmer in whose language the system is written. A language implies a dual form: the generally accepted methodology of writing programs and the programmer's personal style. But the conversation is not about that. A computer, as a system for collecting and "analyzing" information, acquires the ability to construct analysis in the literal sense of the term only when it possesses all the options for constructing both present events and those unfolding in the differing dynamics of each individual process... Therefore the moment has now come when the reality distorted by computers has created an enormous problem at the level of all mankind: people have become inadequate in their perception of reality and of the processes transforming it. I would even say that humanity has created a special kind of virus that destroys a person's ability to see this reality. So a person now does not even see the problem, thinking everything simply happens by itself.
            1. Riv
              0
              1 September 2016 19: 37
              What??? A hacked brain? Oh yes, of course. You're from Ukraine; how did I not notice right away? The brain has been hacked, definitely. :)
      5. 0
        1 September 2016 15: 32
        Very few people understand at all that the binary code of the transistor and sequential code writing do not allow you to create a program that could not be hacked or accessed. Tesla had an ideological teacher, Ruđer Bošković. He said that "if you take Virgil's longest poem, cut it into letters, and put them in a basket, you can always assemble those letters back so as to obtain the original poem." So you are right in saying that one must know and understand when and how to build a system that is open and yet cannot give access to its initial parameters. This can only be created on the basis of infinitely variable NUMBER algorithms expressed through their constant and unchanging functionality. Hence it is impossible, on the functions and properties of number as used now, to create something that cannot be "hacked." Which means the transistor must be multipolar, not built on a binary function.
      6. 0
        1 September 2016 17: 45
        I won't speak for the rest, but about the transistor and its "two positions" you clearly got carried away hi
        1. 0
          1 September 2016 18: 06
          We are still talking about the binary functionality of the transistor. Namely, the question relates to the fact that a person does not own the method when an impulse can be associated with a number. Moreover, without associating the number itself with any symbols, etc.
    3. 0
      1 September 2016 13: 36
      A compliment to YOU! Artificial intelligence is something people cannot even imagine, and if they strain their imagination, it is only to the level of their own abilities. Everything will be much more tragic. Even a mere expansion of imagination will immediately make it possible to devise and carry out activities that for others will mean inevitable death. For now everyone laughs and fools around while the established relationships hold and the balance of forces is maintained. But artificial intelligence, even in its initial form, makes it possible to divide people by intellectual level, and therefore by opportunity. And then there will certainly be no separation into military, combat, or civilian use. Artificial intelligence is a system of neutral values in setting guidelines and analysis, but also methods of solving problems.
  7. 0
    31 August 2016 16: 43
    Quote: nekot
    Well, the fact that the aforementioned Work is a dim fellow (or plays one for the journalists) does not in the least diminish the fact that the Americans intend to speed up their work on AI in the military sphere. That way they'll develop their way right up to Skynet; luckily the script has existed for ages, all that remains is to bring the ideas of "Terminator" to life))

    They are very limited within their professional field! They bury themselves in it. One step to the side throws them off.
    1. +1
      31 August 2016 19: 41
      Quote: meriem

      Being smart under capitalism borders on suicide.

      You cannot be smarter than your boss, otherwise the boss will fire you. And since you don't know how to do anything else, you cannot retrain: you simply lack the resources, the money. So that is the best way out.

      Therefore, under capitalism, smart people get killed off. The terms of competition.

      Well, there is one way out: become the boss. Then learn to kill; you will have to kill with your own hands. And no AI will help you there, because once you give it the right to kill, it automatically stands above you.
  8. +1
    31 August 2016 16: 47
    The Americans are true to form: first they invent a horror story for themselves, then they dutifully start fighting it. What wonderful people, eh? fool
  9. +1
    31 August 2016 16: 48
    You can't cram in the uncrammable. Not with money, not without it. Yes
  10. +1
    31 August 2016 18: 59
    A lot of controversy, a lot of articles on this topic. The calmest and most reasoned one, in my opinion, is this:
    http://dic.academic.ru/dic.nsf/ruwiki/1423
  11. +5
    31 August 2016 20: 20
    No, well, is it really so unclear where all this should be shoved...
  12. +1
    31 August 2016 20: 55
    After that, we can't even bring ourselves to mention carving up budgets. Kindergarten, junior group

    Kindergarten No. 38 "Alyonka." They showed us filmstrips there. I always thought the pictures moved by themselves... And it turns out... the teacher was moving them with her intellect!
  13. 0
    31 August 2016 21: 23
    Meanwhile, the notorious Moore's law has ceased, or will soon cease, to hold: the microminiaturization of electronic components has reached the limit where quantum effects start to interfere. Yet there is no AI, and it is unknown whether there ever will be.

    It seems the USA is once again going to pad the Pentagon budget with who knows what. And fine if they actually earned that money, but they simply print it.
  14. +3
    31 August 2016 22: 09
    And to whom does the author advise acquiring at least ordinary intelligence?

    To the country that is the source of all real innovations in computer technology, owns 100% of system software, and is decades ahead of everyone in development, including in the military sphere...

    One should take one's sworn partner more seriously, and conclusions from such reports should be drawn after actually reading them first.

    Such extraordinary happiness and lightness of thought...
    1. +1
      1 September 2016 08: 21
      I agree with akudr48.
      AI in the form they want to develop further is the most sensible choice in current realities. What is closest to it is the terminator (an adaptive machine), not Skynet... that one had self-awareness.
      What they want to build is a mechanism for identifying and neutralizing threats according to a clear principle. Today, programs need the exact signature of a threat to identify it, but the world does not stand still, and the volume of incoming information keeps growing by an order of magnitude, so refining programs for exact identification becomes too expensive. An adaptive network is therefore the most economically viable solution.
      I think our side should also be developing neural networks to control electronic warfare, so it is in vain that many take this so maliciously :)
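The contrast drawn in that comment, exact signature matching versus an adaptive check, can be roughly sketched as follows. The signature list and the "suspicious trait" scoring are invented for illustration and are not how any real antivirus engine works:

```python
# Exact signature matching: only names seen before are caught.
SIGNATURES = {"evil.exe", "dropper_v2.bin"}

def signature_detect(filename: str) -> bool:
    """Flags a file only on an exact match with a known signature."""
    return filename in SIGNATURES

def heuristic_detect(filename: str) -> bool:
    """A crude 'adaptive' check: scores traits instead of demanding
    an exact match, so it can flag variants it has never seen."""
    score = 0
    if filename.endswith((".exe", ".scr")):
        score += 1
    if "dropper" in filename or "evil" in filename:
        score += 2
    return score >= 2

print(signature_detect("dropper_v3.bin"))  # False: unknown exact name
print(heuristic_detect("dropper_v3.bin"))  # True: generalizes to the variant
```

The maintenance cost the commenter mentions is visible here: the signature set must be updated for every new variant, while the trait-scoring check (and, scaled up, a trained network) covers whole families at once, at the price of possible false positives.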
      1. 0
        1 September 2016 13: 48
        Neural networks must also have algorithmic connections. Otherwise you will be hauling around a brain factory's worth of processors, plus a power plant to feed it.
  15. Riv
    +1
    1 September 2016 08: 05
    Hypersonic killer robots... Aaaahhh! We're-all-gonna-diiie!!!111
  16. 0
    1 September 2016 13: 55
    The author, in his total ignorance, forgot to exclaim:
    "Cybernetics, the venal wench of imperialism" laughing
  17. 0
    1 September 2016 19: 42
    Riv, there is a pattern: as soon as people don't want to think, they immediately switch to the flag. I am not the whole of Ukraine to you. Or is your mind only enough to lump everyone together...? By the way, amid complete inadequacy one sees all the more clearly who is who.
  18. 0
    1 September 2016 21: 21
    Dear sir, take X: the most advanced computer at the moment covers an area of 150 SQUARE meters, yet it is not an AI.
    1. 0
      1 September 2016 22: 10
      At least one person understands that the whole problem, and its solutions, lie in the mathematical basis, not in binary processors. In that case the mathematical process requires a corresponding processor as an intermediate device transforming the code in operation. This would solve the problem of growing energy consumption for transmitting a unit of code as a pulse, because besides the large areas involved, there is also the question of energy security. Well, since nobody wants to think about the possibilities of integrated solutions, isolated solutions will not add up to overall progress.
  19. 0
    2 September 2016 15: 08
    dvina71,
    Firstly, if someone does not know something, that does not mean it does not exist. Secondly, it does not mean that others do not know it. And thirdly, AI would be invented by the human brain, which means the brain must be the more perfect one at its core. Because AI should contain no idle algorithms of system guidelines relative to which analysis is carried out. After all, a person builds his analysis relative to his morality, ethics, motivating goals, etc. That is what I think. And if you ask what system guidelines are, they are precisely those transforming forms of morality, ethics, and so on.
    1. +1
      2 September 2016 19: 30
      All your arguments are of the sort: "the ingredients of interpolation covalents are always adequate."