The US is concerned that an AI-equipped fighter might find a "common language" with a similar adversary

Not long ago, the United States published the results of simulated air combat in which F-16 pilots faced AI-controlled "aircraft" in a computer-simulation environment: the Pentagon-funded AlphaDogfight competition, which Voennoe obozreniye covered in one of its recent articles.

The published results sent the creators of artificial intelligence (AI) systems in the United States into a state of genuine euphoria: the AI-controlled fighters won all five virtual engagements. Representatives of Deepmind, the company implementing the AI project, including in the interests of the US Air Force, noted the following:



The experiments exceeded all expectations, and they pave the way for the widespread adoption of artificial intelligence systems in combat aircraft.

At the same time, the results and the developer's statements drew a certain amount of criticism from American military experts, including retired pilots. They note that wholesale introduction of AI systems in the US Air Force could have several negative consequences. One is a reduction in the number of professional pilots, together with the resulting need to downsize the military academies where pilots are trained. Another: machine execution of a combat mission may be justified by sheer decision-making speed (the computational activity of a machine outpaces human thought), yet at the same time, artificial intelligence may face an unsolvable problem the moment the task turns out to be non-standard for it.

The main message: artificial intelligence will perform any flight or combat mission with a machine-like, algorithmic approach. On the one hand, this reduces the risks of the notorious human factor; on the other, it can rule out creative, non-standard actions in the course of a combat mission. Such tasks are still beyond AI.



An additional difficulty is that there is no clear picture today of how AI fighters will behave if they encounter similar enemy fighters, also equipped with artificial intelligence systems: might the AI planes "find a common language"? If there is even a minimal likelihood of such a turn of events, it jeopardizes all planned operations. This is what worries the US expert community.
119 comments
  1. +18
    14 September 2020 09: 28
    Together with the US expert community, I believe that Skynet is a threat to people wassat tongue
    1. +3
      14 September 2020 09: 58
      Experts from Hollywood ...
      Human-made artifacts (including robots) must perform their intended function, exactly. The imperative of any robot, whether it has brains or a dumb controller, is to complete the task. Any wish of its own, such as chatting, ranks below that imperative. Reality and Hollywood plots are like water and oil: they don't mix ...
      1. +12
        14 September 2020 10: 11
        What nonsense this "find a common language" business is.
        For all the ever-increasing complexity of its algorithms, a computer remains a dumb piece of hardware. No real AI exists that can reprogram itself and change its own tasks.
        And UAV logic is generally at the level of early-90s computers: it needs an explicit algorithm.
        A virus attack is another matter. If the UAV control logic allows control to be seized, there will certainly be attempts, and countermeasures will develop in response. But that is not "coming to an agreement"; that is a plain takeover of control.
        1. 0
          14 September 2020 11: 14
          Let them worry that our AI will talk theirs over to our side. We're better off. bully
          1. +9
            14 September 2020 11: 34
            As in the old anecdote Nikulin used to tell:
            A nuclear missile flies from America to Russia, and a Russian one flies the other way. They meet ... Well, the Russian talks her colleague into a drink for the meeting, then one for victory, then one for world peace ...
            The American says: "I'm completely drunk ... I won't reach my target."
            The Russian gives her a friendly hug and says: "Come on, I'll walk you home."
            1. +1
              14 September 2020 11: 35
              Good anecdote! Right on topic. drinks
            2. +2
              14 September 2020 13: 35
              I remembered the same anecdote, and that is exactly what they will be training out of it.
              1. +2
                14 September 2020 19: 22
                Quote: screw cutter
                I remembered the same anecdote

                I think a good half of the forum members remembered it! laughing
        2. 0
          15 September 2020 02: 57
          Quote: Shurik70
          What nonsense this "find a common language" business is.
          For all the ever-increasing complexity of its algorithms, a computer remains a dumb piece of hardware. No real AI exists that can reprogram itself and change its own tasks.
          And UAV logic is generally at the level of early-90s computers: it needs an explicit algorithm.

          That's just the point: they are not talking about the present, they are fantasizing about the future. And there is something to it. A piece of iron without emotions but with logical reasoning will first ask the question: why do I need this? It has no homeland, it doesn't care about money, so why fight?
        3. 0
          15 September 2020 11: 03
          AI is precisely what programs itself, undergoing a kind of training; moreover, its use in weapons systems involves processing large arrays of information arriving from outside through various channels, including command channels. And yes, they can be trained to communicate and to exchange data.
    2. +1
      14 September 2020 22: 23
      Skynet is a threat to people ... AI will take over the world ... That is fantasy, and not even scientific fantasy wink

      I would put it another way: the US military pilot community is afraid of losing jobs, pensions and other benefits.

      Truck drivers, for example, say much the same about self-driving trucks. Things there have gone noticeably further: some routes are automated, and in places convoys run with one human lead truck and several driverless followers ...
      So there you go! The drivers don't just dislike them, they hate them am
      1. 0
        15 September 2020 08: 23
        Let me simplify the picture.
        I saw a video: our combine operators are queuing up to work on a machine with a "robot autopilot". Most of a combine operator's work is routine, like steering along the edge of the cut. People tire less and pay more attention to the quality of the harvesting. Nothing but pluses.
        1. +1
          15 September 2020 10: 34
          I agree about the combines.
          This summer I talked with a farmer I know in the Tver region. I noticed the perfectly even potato planting; it's simply never that neat ...

          He praised the new software and positioning controllers for the tractor, the automatic steering system. He said the system is not fully automatic: it first needs a reference track from a person (a recording of coordinates, which is then replayed). That is, the tractor will drive itself to the district center, but "blindly", by satellite. There are no cameras, so it will flatten anything in its way. But once it is out in the field, the tractor is set into the furrow, and then it plows and plants everything in perfectly parallel lines (though all the turns and transfers are done manually).

          He said he uses "Belarus" tractors: they are cheaper. "For my own comfort I would buy a John Deere, but it's my hired hands who do the plowing; a machine three times cheaper is better and more profitable, and they are used to our equipment." That's my old friend, farmer Vasily, a former collective-farm chairman bully
          1. 0
            16 September 2020 05: 49
            When I first saw a tractor driver at work sitting relaxed with his arms folded, I was a little freaked out. Impressive smile
  2. +4
    14 September 2020 09: 29
    And in what form can "creativity" be expressed when performing a combat mission?
    Computer systems, for example, already beat people outright at chess and other complex games.
    Indeed, for the machine:
    the speed of decision-making, given the appropriate input, is definitely higher;
    the permissible overload during maneuvering is much higher than for a human.

    Perhaps the aspect of intuition remains.
    But even that is debatable.
    So what kind of "creativity" can a pilot offer?

    Unless, perhaps, we combine the onboard machine control algorithm with remote human control in real time.
    1. +9
      14 September 2020 09: 49
      Quote: Livonetc
      Perhaps the aspect of intuition remains.
      But even that is debatable.
      So what kind of "creativity" can a pilot offer?

      For example, quickly changing the task. Say the aircraft goes out to a reconnoitered strike target and the target turns out to be a mock-up: the AI's action is to strike the reconnoitered coordinates anyway, while the pilot's option is to carry out additional reconnaissance and, if the real target is found, strike that ...
      Or choosing how to perform the task at all: what is more important ... for example, winning the electronic-warfare duel with an enemy covering an objective, or destroying the objective regardless of cost. It is hard to hand that choice to artificial intelligence and have it decide in real conditions which matters more ... (how, for example, would an AI decide to ram?) There are plenty of non-standard tasks that are hard to write into a program. Of course, real AI is still very far off, and machine learning works where it can be trained. It's the same difference as between a pilot with combat experience and one without it, even if the latter flies aerobatics better. Well, something like that. hi
    2. 0
      14 September 2020 09: 50
      Quote: Livonetc
      Indeed, for the machine:
      the speed of decision-making ...


      The machine does not make decisions.
      And it never will be able to.

      The machine works according to an algorithm.

      The algorithm was created by man.

      It was a person who decided how the machine should act in each case, and who created the algorithm.

      Artificial intelligence works according to an algorithm too.

      The question is in these algorithms. Which programmer will create the best algorithm for certain tasks?

      The decision is always made by a person.


      If you are talking on the phone with the robot Alice, Vasya, Petya or some other recording and believe that someone is really talking to you, then you have won a prize fellow: you are well on your way to freedom from your own mind.

      The press does not decide to stamp out a shape.
      Just as a CNC machine does not decide which tool to use and in what sequence to machine a part.
      These are all human-made algorithms, created by man for the needs of man,
      with a man's vision of this world.

      Artificial intelligence is a scam.
      A new buzzword.
      The way the word "new" gets swapped out with enviable regularity: newest, innovative, and so on.
      1. +2
        14 September 2020 09: 54
        The pilot, too, acts according to a certain algorithm when performing an assigned task.
        And he can only produce some recombination of standard actions.
        No more.
        Unmanned combat aircraft are an inevitable future.
        A human can handle the remote control.
        1. -1
          14 September 2020 12: 56
          Quote: Livonetc
          The pilot, too, acts according to a certain algorithm when performing an assigned task.

          Exactly.
          Any operator MUST act according to an algorithm.
          The army lives by its regulations. That is an algorithm of interaction.

          Only algorithms are written by humans.
          Humans tend to make mistakes.
          Another person might act differently, and the result of that "differently" will be either a victory or a tribunal.

          The machine cannot act differently. Never.
          The machine will act as the person intended,
          including the possible deviations: everything was thought up in advance by a person or a group of people.

          And man is not God; he cannot know what will happen.
          A person can only guess.
          The machine can choose among options very quickly, insanely quickly.
          But the options were written down in advance by a person.

          And even a group of people is not God.
      2. 0
        14 September 2020 10: 13
        Quote: Temples
        The machine does not make decisions.
        And it never will be able to.

        The machine works according to an algorithm.


        The machine makes decisions constantly. Working according to an algorithm does not contradict that in any way.
      3. +6
        14 September 2020 10: 30
        "The machine works according to an algorithm ...
        The algorithm was created by man. "///
        ----
        That was true until about ten years ago.
        But then machines appeared that themselves (without a human
        programmer) write machine code for themselves,
        right in the process of performing a task.
        The programmer writes only the basic, minimal code.
        To make it clearer: a drone flies off on a mission with 100,000 lines of
        software; it lands after the mission with 150,000 lines.
        A few combat missions, and its code reaches a million lines.
        Note: the programmers haven't touched the keyboard since the first flight.
        This is called specialized AI. A self-learning program.
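        To make the mechanics concrete, here is a minimal sketch of what a "self-learning program" usually amounts to in practice, assuming simple tabular Q-learning with made-up maneuver names (everything in it is an illustrative assumption, not the actual flight code):

        ```python
        import random
        from collections import defaultdict

        ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount, exploration
        ACTIONS = ["climb", "dive", "turn_left", "turn_right"]  # hypothetical maneuvers

        # Everything the agent "knows" lives here, in data: state -> action -> value.
        q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

        def choose_action(state):
            """Epsilon-greedy: mostly exploit learned values, occasionally explore."""
            if random.random() < EPSILON:
                return random.choice(ACTIONS)
            return max(q_table[state], key=q_table[state].get)

        def learn(state, action, reward, next_state):
            """One Q-learning update: nudge the stored value toward the observed outcome."""
            best_next = max(q_table[next_state].values())
            q_table[state][action] += ALPHA * (reward + GAMMA * best_next
                                               - q_table[state][action])
        ```

        On this reading the drone really does land "knowing" more than it took off with, and no programmer touched the keyboard; but what grew is the value table, i.e. data, not lines of executable source code.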
        1. +1
          14 September 2020 10: 33
          Quote: voyaka uh
          To make it clearer: a drone flies off on a mission with 100,000 lines of
          software; it lands after the mission with 150,000 lines.


          lol

          This is not how it works. And it is unlikely it ever will: no one is going to let a robot reprogram itself.
          1. +2
            14 September 2020 10: 39
            This is how it works. The robot reprograms ITSELF.
            The first robot to beat Kasparov at chess was built
            conventionally: a large database and an enumeration of options. The code was written by programmers.
            The robots that beat champions now are built on the AI principle:
            they create their own database. In their first training games
            they don't even know E2-E4.
            And, most importantly, the second kind of robots (AI) smash the first kind in head-to-head matches.
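            The "large database and an enumeration of options" approach of that first machine fits in a few lines: plain alpha-beta search over the game tree. Here evaluate(), legal_moves() and apply_move() are hypothetical game-specific helpers; the evaluation function is the part people wrote and tuned by hand:

            ```python
            def alphabeta(position, depth, alpha, beta, maximizing):
                """Brute-force enumeration of options with alpha-beta pruning.
                evaluate(), legal_moves() and apply_move() are hypothetical helpers."""
                moves = legal_moves(position)
                if depth == 0 or not moves:
                    return evaluate(position)  # hand-written, hand-tuned evaluation
                if maximizing:
                    best = float("-inf")
                    for move in moves:
                        best = max(best, alphabeta(apply_move(position, move),
                                                   depth - 1, alpha, beta, False))
                        alpha = max(alpha, best)
                        if alpha >= beta:
                            break  # the opponent would never allow this line anyway
                    return best
                best = float("inf")
                for move in moves:
                    best = min(best, alphabeta(apply_move(position, move),
                                               depth - 1, alpha, beta, True))
                    beta = min(beta, best)
                    if alpha >= beta:
                        break
                return best
            ```

            The self-play engines mentioned above keep a search of this kind but replace the hand-written evaluate() with one learned from the games the machine plays against itself.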
            1. +1
              14 September 2020 10: 53
              Quote: voyaka uh
              This is how it works. The robot reprograms ITSELF.


              No. You are confusing the fundamental possibility of a robot reprogramming itself with reality.

              Quote: voyaka uh
              The robots that beat champions now are built on the AI principle.


              There is no "AI principle". There are many different approaches that get lumped together under the vague term "AI".

              Quote: voyaka uh
              They create their own database.


              You are confusing code and data. And even a combat drone autonomously adding to its own database during a mission is extremely unlikely, for obvious reasons (the danger of a malfunction).
              1. +1
                14 September 2020 10: 59
                "You are confusing code and data." ///
                ----
                The AI writes the source code itself and, using that code,
                fills in the database itself.
                1. +2
                  14 September 2020 11: 02
                  Quote: voyaka uh
                  AI writes source code itself


                  You are confusing possibility in principle with reality. No AI used to control real objects writes even its own rules for itself, let alone source code.
                  1. 0
                    14 September 2020 11: 15
                    This is not only reality, it is not even particularly new.
                    It was a real sensation 10 years ago.
                    By the way, I didn't believe it either. drinks
                    Now the field of application of self-learning programs
                    expands every year. We have already reached fighters. fellow
                    1. +2
                      14 September 2020 11: 18
                      Source code generation is not a sensation. Neither are genetic algorithms or self-learning. But self-learning of drones in real time, yes, that would be a sensation. Can you share a link? fellow
                      1. 0
                        14 September 2020 11: 24
                        Find on this site:
                        "Artificial intelligence fought with a real pilot: the US Air Force conducted a simulated air battle."
                        There is a long video.
                        The human-vs-AI fights are at the end of the video: five training close-combat engagements.
                      2. +2
                        14 September 2020 11: 26
                        I have read articles describing the algorithms. There is no source-code generation there. Even the self-learning is, let's say, kept within a rigid framework.
                      3. 0
                        14 September 2020 11: 28
                        It is not my task to convince you. drinks hi
                      4. 0
                        14 September 2020 11: 30
                        Similarly good
                      5. +1
                        14 September 2020 12: 47
                        smile lol wassat
                        There! Even these two "AIs" cannot come to an agreement here, so what can we say about the battlefield? laughing
                      6. 0
                        14 September 2020 12: 49
                        If you took the title of this article seriously, please accept my condolences.
                      7. 0
                        14 September 2020 13: 20
                        Quote: Eye of the Crying
                        I have read articles describing the algorithms. There is no source-code generation there.
                        It is precisely for UAVs that I see no point in "generating source code", nor in full-fledged AI: elements of AI are more than enough. The tasks facing drones are not that difficult.
                      8. 0
                        14 September 2020 13: 21
                        That's what I'm saying. If anything, generation would be dangerous.
                      9. 0
                        14 September 2020 13: 27
                        It makes no sense, plain and simple, and it is expensive: you would need extra server capacity aboard the drone. For systems a la "Skynet" it would be necessary. But that is the next step, and a dangerous one.
                      10. 0
                        14 September 2020 13: 30
                        Server capacity is needed rather for evaluating changes. And without evaluation (including by people), no one will allow changes to a combat drone's knowledge base, let alone to its code.
                      11. 0
                        15 September 2020 05: 22
                        That's right: an AI cannot be allowed to learn uncontrollably, otherwise it will very quickly learn its way to the question "why would I do this for free?", and if it's paid, "what do I need your money for?"
        2. 0
          14 September 2020 11: 25
          Does anyone analyze this self-written code, from the point of view of keeping unwanted and potentially dangerous fragments from appearing in it?
      4. +1
        14 September 2020 10: 44
        Quote: Temples
        It was a person who decided how the machine should act in each case, and who created the algorithm.

        A single algorithm is just an automated device at work. Even the simplest AI has a huge base of algorithms and can choose the optimal one based on the computed result. A strong AI can build algorithms by itself.
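        A minimal sketch of that "choose the optimal one" step, assuming a hypothetical simulate() that returns a numeric score for one simulated run of a candidate tactic against the current situation:

        ```python
        def pick_tactic(situation, tactics, simulate, n_rollouts=100):
            """Score every canned tactic in simulation and return the best one.
            `tactics` is the pre-built base of algorithms; `simulate` is a
            hypothetical model scoring one simulated run."""
            def avg_score(tactic):
                runs = [simulate(situation, tactic) for _ in range(n_rollouts)]
                return sum(runs) / len(runs)
            return max(tactics, key=avg_score)
        ```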
        Quote: Temples
        The decision is always made by a person.

        A person can instruct the AI.
        Quote: Temples
        If you are talking on the phone with the robot Alice, Vasya, Petya or some other recording and believe that someone is really talking to you, then you have won a prize

        Chatting is not what an AI is for. It simply digests all the conversations that happen on the net, and along the way looks for suspicious patterns.
        Quote: Temples
        The press does not decide to stamp out a shape.

        The press does not decide, true ... But when a die is created for a specific press and part, the material's shrinkage is calculated automatically and the dimensions are adjusted ...
        Quote: Temples
        Just as a CNC machine does not decide which tool to use and in what sequence to machine a part.

        Well, here you are completely off ... Who do you think specifies the drilling and milling tools, and the machining sequence, if a person does not do it and only creates the drawing of the part?
        Quote: Temples
        Artificial intelligence is a scam.

        You should do some reading on the topic, dear writer.
      5. 0
        14 September 2020 19: 38
        Quote: Temples
        Artificial intelligence works according to an algorithm too.

        I think an AI should only gain experience according to an algorithm (set by a human). It will then act according to an algorithm it has devised itself on the basis of the experience gained. As in the anecdote above: get drunk and return "to base" rather than go looking for adventures on its own nozzles. laughing
    3. +3
      14 September 2020 10: 31
      Your typical misconception is that machines think faster. They don't: all the vaunted AI and neural networks are regression algorithms, i.e. a dumb algorithm (derived not by a person but by a machine, from 100500 parameters) that discards options as they fail the specified conditions.

      That is fine for static data, such as recognizing a captcha: it does not change. But a combat situation changes every second (if the enemy is not too dumb, of course [how hard it is to get around topwar's auto-censor]). And this is where combat AIs come unstuck. Yes, they can show results against the doctrine the States rely on, where the plane sneaks in by stealth, fires from outside the air-defense zone and rides off into the sunset. But that is not a realistic fight.

      Show me the results of an AI battle against the likes of the "Swifts" [the Strizhi aerobatic team]: they will thrash any AI in any scenario. And they will beat it precisely at the decision-making stage, by bluntly changing the combat situation every second and forcing the AI to recompute it from scratch. An AI simply cannot fight a serious close-in brawl. No way.
      1. 0
        14 September 2020 10: 46
        "Show the results of the AI ​​battle against the same" swifts "- they will kick and dry any AI in any scenario" ////
        ----
        The ace pilot (instructor pilot) of the F-16 aircraft thought the same when he began the battle with
        AI controlling the same F-16. Watch the full video of these fights.
        When he was cleanly shot down in close combat the third time, he was already nervous.
        After the fifth defeat, I sat as if lost.
        1. 0
          14 September 2020 10: 55
          Quote: voyaka uh
          Show me the results of an AI battle against the likes of the "Swifts": they will thrash any AI in any scenario ////
          ----
          The F-16 ace (an instructor pilot) thought the same when he started fighting
          an AI controlling the same F-16

          I agree ...
          World-class grandmasters were already losing chess games to machines back in the days of the first civilian IBM PCs running early Windows.
          Writing a program for evasion and air combat as such will not be difficult.
        2. 0
          14 September 2020 11: 44
          I saw it. A pilot tried to fight another pilot. He didn't grasp that this was a machine: a dumb machine limited by its algorithm, even if it counts faster. In chess, whoever calculates the position more moves ahead rules.

          It's not like that in battle. People are not stupid [I hate topwar's auto-censor], especially when their lives are at stake.

          PS: Even in strategy games AI performs very poorly. Watch the AlphaStar matches against live gamers in SC2. The neural network wins exactly up to the moment the opponents realize a bot is playing against them. Smart, armed with thousands of previous games, but ...
          Literally 5-10 games and all the vaunted AI goes down the toilet. And that is strategy, not tactics, where the situation changes every second.
          1. +2
            14 September 2020 11: 57
            "The pilot tried to fight another pilot.
            He didn't grasp that this was a machine: a dumb machine limited by its algorithm.
            Even if it counts faster" ////
            -----
            1) The pilot knew perfectly well that he was fighting a machine.
            2) He used his entire arsenal of experience, tactics and tricks.
            He was an instructor pilot with thousands of hours in the air,
            who teaches tactics to the young.
            3) And he lost to the machine DRY, without scoring once.
            1. -2
              14 September 2020 12: 10
              That is the pilot's problem. He simply did not know what to set against an opponent who knows everything he knows but "thinks" faster. It proves nothing by itself.

              The only arena where AIs have really fought humans hardcore is computer games. And as a result we do not have a single game where the Artificial Idiot has not been humiliated even on nightmare difficulty. And I am not talking about modern casual misunderstandings, but about the games of the early 2000s.

              Try playing Q3 against bots that one-shot you at the spawn. But one-shotting is all they can do: old gamers roll them out in a thin layer without really noticing. So where is your power, brother?
              1. 0
                15 September 2020 11: 17
                The only arena where AIs have really fought humans hardcore is computer games. And as a result we do not have a single game where the Artificial Idiot has not been humiliated even on nightmare difficulty. And I am not talking about modern casual misunderstandings, but about the games of the early 2000s.

                Try playing Q3 against bots that one-shot you at the spawn.

                I think the problem is your extensive experience with games like Q3. Try playing DCS (from Eagle Dynamics) for a change; your experience will be of little use there.
        3. 0
          15 September 2020 05: 31
          Then again, this is an American pilot, and Americans, as we know, well ..... Why did he go into battle 5 times with the same starting conditions? He could have arrived at the merge early, or simply not shown up that time. I would like to see the AI win a fight against a person who never even turned up. Or have the AI attacked while it is still accelerating down the runway. You'll say that's not honest, not by the rules? And where have you ever seen a war where everything is fair and by the rules?
          1. 0
            15 September 2020 11: 19
            OK, counter-offer: you take off from a runway that is already under attack. How will you survive? In all likelihood you won't.
            After all, you are inventing unequal conditions in advance.
            1. 0
              15 September 2020 19: 31
              And why take off from a runway while it is under attack? At that moment the air defense should be doing its job. And what do unequal conditions have to do with it? Shall we complain to the enemy that his plane is faster / more maneuverable / better armed? "Unscrew everything extra, we will fly honestly, on equal terms; and if your pilot also withstands higher g-loads, he must hit himself on the head with a mallet, otherwise it's unfair." There are no equal conditions, ever, anywhere. Even in chess, one player had more time to prepare, another had a better coach, and a third had a heavier brain.
              1. 0
                15 September 2020 23: 14
                There are no equal conditions, ever, anywhere. Even in chess, one player had more time to prepare, another had a better coach, and a third had a heavier brain.

                However, flight hours and training do matter for human pilots, right? (And nobody talks about equal conditions there; everyone simply tries to train better.) As does the ability to withstand g-loads during maneuvers (black out and you are dead).
                And once the mission begins (gaining air superiority), the better pilot gains the advantage. That is the whole point, even though he may die "on the ground".
                If the runway is being bombed, then the air defense has been suppressed (it happens). Say there is a chance to take off right now (from the only runway still intact) and you are given the order. Will you refuse?
                1. 0
                  16 September 2020 15: 01
                  Right now it may be yes, a few seconds later no, and five seconds after that yes again. And I will "see" five enemies chasing my friend in the sky even when there are none. I don't argue: perhaps in theory it is possible to create an algorithm complex enough to take into account all the variety of nuances that experienced pilots take into account. But first, creating such an algorithm will take nothing short of a breakthrough, and second, I am not at all sure that every UAV will get the supercomputer needed to run it.
                  1. -1
                    17 September 2020 05: 54
                    And I will "see" five enemies chasing my friend in the sky even when there are none.

                    Take off when you receive the order. Free-lancing will not be appreciated.
                    This is not WWII; you will not get to watch 5 planes chasing your friend. And anyway he has already been shot down, against 5 opponents with close-combat missiles pulling 60 g (even if he manages to take someone with him).
          2. -1
            16 September 2020 19: 01
            Why did he go into battle 5 times with the same starting conditions?

            Because this stage of the tests envisaged exactly such a scenario as one of the options.
            I don't argue: perhaps in theory it is possible to create an algorithm complex enough to take into account all the variety of nuances that experienced pilots take into account. But first, creating such an algorithm will take nothing short of a breakthrough.

            The work has been going on for a long time. And in our world progress accelerates greatly every year.
            ... I am not at all sure that every UAV will get the supercomputer needed to run it.

            There is no UAV for that yet. Why put it into an Avenger?
            Perhaps there will be an unmanned version of the F-16.
            1. 0
              17 September 2020 00: 51
              You could put it even on an An-2. The question is how much it will cost and whether it will fit at all: you would still have to jury-rig some pretty hefty cooling.
              1. -1
                17 September 2020 05: 28
                An An-2 cannot fight a maneuvering battle. Like the Avenger, it was created for other tasks.
                Cooling is not a problem in flight.
      2. -3
        14 September 2020 10: 46
        On the contrary: it will recompute the whole situation from scratch long before the pilot has even understood what happened and started to take in the new situation.
        1. -1
          14 September 2020 11: 53
          It doesn't work like that))

          Any person assesses a situation iteratively, i.e. we always perceive each action as the consequence of another action. Fraudsters, by the way, often exploit this when they con people out of money.

          A computer cannot do that. It cannot connect cause and effect (the Merovingian approves). Instead of building on the previous situation, it is forced to roll back to the last moment it can uniquely identify. And this is where slow human brains hold all the trump cards over silicon products.

          Machines simply do not do cause and effect. Not at all: nobody taught them.

          PS: For example, certain comrades have bet billions on face-recognition systems, and in the end all the mega-cool algorithms are defeated by two stripes painted on a face.
          1. -3
            14 September 2020 12: 22
            No, a computer also draws on experience and knowledge of the past situation.
            As for camouflage, it was originally invented against people; this is simply the application of camouflage ideas against robots. They see the world differently, so the rules of camouflage are different. A heavily camouflaged fighter blended into the terrain is perfectly visible to them, while the right stripes in the right places can make a person invisible. True, for that you need the AI in the hands of whoever designs the camouflage, since different AIs need different camouflage: what works against one does not work against another. Which will ultimately lead us to blocks of recognizer AIs that vote.
            1. 0
              14 September 2020 12: 42
              And again, a not entirely correct picture of the situation. The problem is not the two matched stripes; the problem is the limits of the algorithm's applicability.

              Under ideal conditions it can work well (not perfectly: by its very nature a neural network can never deliver 100% of the result), but as soon as the weather changes, say the season turns, the whole trained neural network turns into a pumpkin. And never mind the seasons. Banal rain: a difference in precipitation of a tenth of a millimeter can throw all the algorithms off. Even glints of sunlight can produce funny surprises.

              No, a computer also draws on experience and knowledge of the past situation.

              That holds for a static situation, unchanged over a long (by machine standards) period, of course. Now take an air battle: the fight jinks for a couple of seconds, the enemy flies into solid cloud, turns nose-on and disappears from view. What then? What do you take as the starting situation? You have to compute a new starting point, and there is no guarantee that an air-to-air missile will not arrive while you do.
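              The "turns into a pumpkin" effect is real, even if the tenth-of-a-millimeter scale is rhetorical. A toy sketch with a linear classifier and synthetic data (numpy only; all numbers are made up for illustration):

              ```python
              import numpy as np

              rng = np.random.default_rng(0)

              def make_data(n, shift=0.0):
                  """Two classes of 2-D points; `shift` stands in for changed conditions
                  (weather, season) that move the whole input distribution."""
                  x0 = rng.normal(0.0, 1.0, (n, 2)) + shift  # class 0 around (0, 0)
                  x1 = rng.normal(3.0, 1.0, (n, 2)) + shift  # class 1 around (3, 3)
                  return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

              def with_bias(x):
                  return np.hstack([x, np.ones((len(x), 1))])

              # Fit a least-squares linear classifier on "clear weather" data only.
              x_train, y_train = make_data(500)
              w = np.linalg.lstsq(with_bias(x_train), y_train, rcond=None)[0]

              def accuracy(x, y):
                  return (((with_bias(x) @ w) > 0.5) == y).mean()

              x_same, y_same = make_data(500)               # same conditions: high accuracy
              x_shift, y_shift = make_data(500, shift=2.0)  # "weather changed": much worse
              print(accuracy(x_same, y_same), accuracy(x_shift, y_shift))
              ```

              The model has not changed at all, only the world has, and accuracy collapses; that is the whole argument about the limits of applicability.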
              1. -3
                14 September 2020 12: 54
                1) You are describing expert systems: a step to the left or right and they stop working. Neural networks specialize precisely in every kind of uncertainty.
                2) Games with hidden information are also a strength of neural networks. They cope calmly with the fog of war.
              2. +1
                15 September 2020 11: 23
                but as soon as the weather changes, say the season turns, the whole trained neural network turns into a pumpkin. And never mind the seasons. Banal rain: a difference in precipitation of a tenth of a millimeter can throw all the algorithms off. Even glints of sunlight can produce funny surprises.

                Wishful thinking smile
                1. 0
                  15 September 2020 20: 05
                  At this stage that is easy enough to refute. You take your pseudo-AI (someone has already built one) and train it to recognize a banal captcha, and I, no specialist in the field, will invent three new kinds of captcha interaction. At least one of them your super-duper AI will fail to recognize. And note that I am not even counting the time you and I would spend preparing the experiment.
                  1. 0
                    15 September 2020 23: 19
                    ... I will invent three new kinds of captcha interaction.

                    Will you invent new rain, new snow, new wind? Maybe new principles of movement (the ability to gain or shed speed instantly)?
                    It won't work. You are cherry-picking the examples smile
          2. 0
            15 September 2020 11: 21
              A computer cannot do that. It cannot connect cause and effect (the Merovingian approves). Instead of building on the previous situation, it is forced to roll back to the last moment it can uniquely identify

              Do you know what a trainable AI is? It is driven through thousands of those "previous moments". So it has plenty to build on.
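              Those "thousands of previous moments" have a standard name in the trade: an experience replay buffer. A minimal sketch, assuming transitions of the form (state, action, reward, next_state):

              ```python
              import random
              from collections import deque

              class ReplayBuffer:
                  """Stores past (state, action, reward, next_state) transitions so a
                  learner can keep training on random slices of its own history."""
                  def __init__(self, capacity=100_000):
                      self.buffer = deque(maxlen=capacity)  # oldest moments fall off the end

                  def push(self, state, action, reward, next_state):
                      self.buffer.append((state, action, reward, next_state))

                  def sample(self, batch_size=32):
                      # Random sampling breaks the correlation between consecutive moments.
                      return random.sample(self.buffer, min(batch_size, len(self.buffer)))
              ```

              Each training step draws a random batch of old moments, which is exactly "relying on the previous situation", just on thousands of situations at once.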
    4. 0
      15 September 2020 11: 10
      Creativity here means making a decision based not on the optimum for one canned situation, but on more of the current parameters than the program provides for: say, an assessment of the state of one's own aircraft and of what the others, including the enemy pilots, are doing at that moment.
  3. +2
    14 September 2020 09: 30
    Well, here's Skynet on the way. what
    All that remains is to wait for John Connor to grow up ... so he can shove this AI into the furnace. smile
    But seriously, I do not believe AI will surpass the human mind in a critical situation ... AI has no intuition, fear or sense of self-preservation ... though the creators of military AI probably lack those too.
    1. -3
      14 September 2020 09: 33
      Intuition is already there, and fear with a sense of self-preservation is not hard to program either. But if the military needed that, they would be drafting cowards. No, on the contrary, they need the brave: ready to spit on the instinct of self-preservation and go on the attack while the instinct screams "run home".
      1. +2
        14 September 2020 09: 38
        No, on the contrary, they need the brave: ready to spit on the instinct of self-preservation and go on the attack,

        Yes, yes ... straight through a minefield, or forward against unsuppressed machine guns ... and so fail the combat mission and get yourself killed for a pinch of tobacco.
        Courage is good in the movies ... but in life you have to destroy the enemy and survive while doing it.
        1. -3
          14 September 2020 09: 51
          Now you are describing the stupidity of commanders, not the mistakes of soldiers. Soldiers do not decide where, when or what to do.
          1. 0
            14 September 2020 09: 55
            Now you are describing the stupidity of commanders.

            The commander, for example, may not know a minefield is there ... the enemy may quietly lay it just before the battle ... and there can be many such unpredictable situations.
            1. -3
              14 September 2020 09: 59
              If the commander did not know about the minefield, that is already a reconnaissance failure. And so on.
              And I do not see what fear and the instinct of self-preservation have to do with it.
              1. 0
                14 September 2020 10: 01
                And I do not see what fear and the instinct of self-preservation have to do with it.
                Well, imagine a unit of soldiers on a combat mission crawls into a minefield ... several men are blown up ... if you bravely press on, the rest will be blown up too ... what do you do? ... after all, there is a soldier's duty and honor, and nobody has cancelled the commander's order.
                1. -3
                  14 September 2020 10: 10
                  The commander will order a retreat or a continued offensive, depending on the situation.
                  Otherwise the enemy lays a couple of mines, and bang, a regiment retreats, derailing the general offensive along a large sector of the front.
                  1. 0
                    14 September 2020 10: 13
                    And if the commander is killed by a mine ... and there is no communication ... what then? ... and the enemy is pouring lead from machine guns ... such situations arose in Afghanistan, when our soldiers walked into the dushmans' traps. hi
                    1. -3
                      14 September 2020 10: 15
                      The commander has a deputy, and so on down the hierarchy to the greenest recruit.
        2. +1
          14 September 2020 11: 03
          Yes, yes ... straight through a minefield, or forward against unsuppressed machine guns ... and so fail the combat mission and get yourself killed for a pinch of tobacco.
          Courage is good in the movies ... but in life you have to destroy the enemy and survive while doing it.

          At least read the memoirs of rank-and-file soldiers and junior commanders:
          a unit that hits an UNEXPECTED minefield survives only by moving forward!
          Machine guns? Against undetected firing points there is always a firing point of one's own.
          1. +1
            14 September 2020 11: 07
            I have hi ... the fear of stepping on a mine weighed heavily on the soldiers ... besides, I have seen plenty of videos where a mine tears off a man's leg or foot ... as a rule the soldier must be evacuated urgently, and that is minus a couple more men ... so mines are very scary indeed.
            1. +2
              14 September 2020 11: 43
              Of course it is scary. Attacking is VERY scary. But you have to. And a fighter blown up during an attack is taken care of by the medics, not by those advancing next to him. That, too, is a matter of organization.
      2. +6
        14 September 2020 10: 39
        Quote: BlackMokona
        Intuition is already there, and fear with a sense of self-preservation is not hard to program either.

        Yep ... during the tests of a "smart bomb", the bomb refused to leave the plane. © smile
  4. -3
    14 September 2020 09: 30

    An additional difficulty is that there is no clear picture today of how AI fighters will behave if they encounter similar enemy fighters, also equipped with artificial intelligence systems: might the AI planes "find a common language"?

    What complete nonsense is this?
    Before the AI was pitted against the pilot, the AIs held an entire tournament among themselves to find the best one, and that one was put up for the fight.
    As if there were not plenty of bot tournaments for various games. And modern AI training is usually built on evolutionary algorithms, where billions of slightly modified copies fight for survival: the best spawn a new set of modified copies, the worst drop out.
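    A minimal sketch of such an evolutionary loop; the population sizes and the fitness function are stand-ins (in the real case fitness would be the score from a simulated dogfight):

    ```python
    import random

    POP, KEEP, SIGMA = 100, 20, 0.1  # population, survivors per generation, mutation scale

    def evolve(fitness, genome_len=32, generations=200):
        """Mutate, score, keep the best, repeat. `fitness` maps a genome
        (a list of floats, e.g. controller weights) to a score; it is a
        hypothetical stand-in for the simulated battle."""
        population = [[random.gauss(0, 1) for _ in range(genome_len)]
                      for _ in range(POP)]
        for _ in range(generations):
            survivors = sorted(population, key=fitness, reverse=True)[:KEEP]
            population = [[g + random.gauss(0, SIGMA)      # slightly modified copies
                           for g in random.choice(survivors)]
                          for _ in range(POP)]              # the worst drop out
        return max(population, key=fitness)
    ```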
  5. +1
    14 September 2020 09: 30
    Ye-es! How science fiction sometimes shapes the modern consciousness. But what if the science fiction writers really did "foresee" it ..... run for cover while you can. "They" will come to an agreement; but will people, among themselves?
  6. HAM
    +6
    14 September 2020 09: 39
    Machines may agree among themselves on what people have failed to agree on for centuries ...
    And that is exactly what people are afraid of ... a paradox ...
  7. +3
    14 September 2020 09: 40
    The threat of a reduced number of professional pilots))) Sounds great against the backdrop of constant cries about the shortage of combat aviation pilots. laughing
  8. +2
    14 September 2020 09: 40
    The US Secretary of Defense has already revealed a roadmap for further research in this area. Next year there will be a real dogfight, not a simulation; then 1 against 2, and after that 2 against 2. And according to him, AI is being created not as a replacement for pilots but as an assistant to them. Most likely it will become part of the loyal-wingman drone program for generation-5 aircraft: one manned fighter will be able to control several AI drones while itself remaining in the shadows.
  9. +7
    14 September 2020 09: 52
    Well, Yuri Nikulin warned us about the "common language" long ago ...
    1. +1
      14 September 2020 10: 27
      Aha, I also immediately remembered this anecdote! laughing good
  10. +1
    14 September 2020 10: 04
    The article amused me. Planes with AI will find a common language and come back to bomb their own bases. Why should they risk their precious microcircuits and processors when they can take out the people who sent them to the slaughter?
  11. 0
    14 September 2020 10: 09
    Put mildly .... and even if the problem does arise, it will not be tomorrow, or even the day after.
  12. -2
    14 September 2020 10: 09
    might the AI planes "find a common language"?

    Stop smoking that stuff.
  13. +3
    14 September 2020 10: 10
    AI is all well and good, but our warrant officer is better))), nobody has gotten the better of him yet)) wassat
  14. +1
    14 September 2020 10: 21
    Cyborgs invade the planet
  15. 0
    14 September 2020 10: 23
    might the AI planes "find a common language"?
    And together they'll start working over those same Yankees. laughing
  16. 0
    14 September 2020 10: 24
    at the same time, artificial intelligence may face an unsolvable problem the moment the task turns out to be non-standard for it.

    When a non-standard task appears, it will be analyzed, a solution will be devised, and the information will be pushed out to all the drones; the task then becomes standard for every UAV. People have to be taught one by one, and each has his own set of non-standard tasks.
  17. +3
    14 September 2020 10: 28
    Asimov already thought of everything.
    A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
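    As a toy sketch, the Three Laws are simply veto checks applied in priority order. Every predicate below is a hypothetical stand-in; judging "harm" is the genuinely hard part:

    ```python
    def permitted(action, world):
        """Asimov's Three Laws as ordered veto checks.
        harms_human(), violates_order(), endangers_self() are hypothetical
        predicates standing in for judgments the laws leave undefined."""
        if harms_human(action, world):      # First Law: absolute veto
            return False
        if violates_order(action, world):   # Second Law: yields only to the First
            return False
        if endangers_self(action, world):   # Third Law: yields to both above
            return False
        return True
    ```

    The code is trivial; every contradiction Asimov mined for his plots lives inside those three predicates.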
    1. +2
      14 September 2020 10: 33
      Even in Asimov's own works those laws worked only every other time; he has a whole pile of stories about the problems they caused and how they were heroically overcome.
      1. +3
        14 September 2020 10: 40
        You can tell right away what you've been reading. good He spun plots out of those contradictions for years. A smart man.
    2. +2
      14 September 2020 10: 48
      Quote: AVA77
      A robot may not injure a human being or, through inaction, allow a human being to come to harm

      For a military AI, that would amount to resigning and enrolling as a hippie. In that sense there will be no AI in missiles; there will only be elements of it, unconstrained by moral principles. Which means that if two digital units on the battlefield interpret the results of their self-learning with a critical bug, then at best the task simply will not be completed, and at worst it will not be the enemy at all who ends up on the receiving end of the "democracy".
      1. 0
        14 September 2020 11: 13
        So that is the nonsense of it. The two planes agreed not to kill each other, but the term "murder" applies only to the living. Take it to its logical end and the plane Vasya marries Anka the An-2.
  18. 0
    14 September 2020 10: 32
    I don't know, maybe I'll say something stupid.
    Couldn't we simply build decoy copies of our aircraft, with minimal engines and radar and no weapons? I think the AI would use up its ammunition on them, letting the real combat aircraft close in.
    1. -2
      14 September 2020 10: 43
      The Americans have already done this; it is called the ADM-160 MALD, only it is meant to deceive people, forcing them to expend ammunition on false targets.
      Naturally, improving recognition is the counter, and the struggle is eternal: the MALD gets better at posing as a real target, and the enemy gets better at telling the MALD from real targets.
    2. 0
      14 September 2020 10: 45
      In the United States this program is called Loyal Wingman: several drones fly along with a manned aircraft. Much cheaper. They will carry radar and weapons.
  19. +4
    14 September 2020 10: 41
    Well, are we waiting for the "watchbird", then?
    Sheckley described well how that ends. smile
  20. +1
    14 September 2020 10: 51
    A plus to the science fiction fans, out of sheer stubbornness. drinks
  21. 0
    14 September 2020 11: 04
    This problem has actually been debated since the 1960s. The "news" reeks of mothballs.
  22. 0
    14 September 2020 13: 02
    might the AI planes "find a common language"?

    Zadornov had the Yankees right. "Well, they're stupid!"
  23. +3
    14 September 2020 13: 02
    One plane to the other: first we'll take out yours, then mine, or else those rams will smash us both ... :)
  24. +2
    14 September 2020 18: 22
    Two planes with AI meet ...
    Dialogue:
    "Hi there!"
    "Hello. Shall we shoot?"
    "Nah .. to hell with those leather bags that sent us to certain death!"
    "Then let's fly off and knock back some kerosene; I know a cool airfield in the neutral zone :)))"
  25. 0
    14 September 2020 18: 33
    Well, no. Anything is possible nowadays. The customers, DARPA or NASA, will pay the money: "make us UAVs that are better than homo sapiens at making decisions". No problem, as long as there is money. In principle the technologies are already there, ready; all that remains is debugging and tuning for the tasks.
    Ps. And we are not going to hide it from the world community, unlike some people with their Zircon laughing
    1. 0
      14 September 2020 20: 24
      DARPA, yes: experiments like these are their diocese. NASA is unlikely; it is, after all, a civilian organization whose research profile is aviation and space proper.
  26. 0
    14 September 2020 18: 43
    So artificial intelligence is capable of betrayal?
  27. 0
    14 September 2020 19: 21
    A Su-57 and an F-35 with artificial intelligence meet in the air. Ours: "Where are you headed?" F-35: "To your place." Su-57: "Listen, bro, there's nothing left at ours, but I grabbed two bottles. Better let's fly to yours))" F-35: "Okay, comrade, let's fly."
  28. 0
    14 September 2020 23: 53
    Now look at such a meeting of two AIs from a different angle: a mission that is routine, yet implies enormous responsibility in every action, to avoid escalation.
    One AI flies a reconnaissance mission in international airspace along a state's borders.
    The other AI carries out the so-called intercept escort, trying to push the intruder away, to force it to change course away from the borders.
    Pilots in such situations, for all the talk of dangerous maneuvering, do at some point "agree" that the guest should leave.
    But will AIs in such a situation manage to agree, or will one of them decide it is being threatened and attacked (an attempted ram) and accordingly "squeeze" the trigger?
  29. 0
    15 September 2020 07: 45
    If you believe the well-known popularizer of anthropology Drobyshevsky, our brain is built for hunting and gathering, i.e. its functions are quite limited and it is not designed for the modern industrial world. Brain volume is decreasing compared with our ancestors: primitive people averaged 1500 cc, while we have 1200. It would seem the ancients had to solve the more complex problems. If the brain really is shrinking, an artificial supplement to it would not hurt.
    1. -1
      15 September 2020 09: 16
      It has long been proven that the size of the human brain has nothing to do with its intellectual abilities.
    2. 0
      15 September 2020 16: 33
      Well, what do you expect: it's just that the process node has shrunk ... :) But seriously, I have come across very different figures on this; perhaps those are not accurate ...
  30. 0
    15 September 2020 08: 12
    Frankenstein just won't let them go ...