Military Review

Tank panic. The Pentagon intends to equip armored vehicles with artificial intelligence


Source: qz.com


The controversial ATLAS


Early last year, the US military alarmed the world with news that it was developing ATLAS (Advanced Targeting and Lethality Aided System), a system designed to take combat operations to a new level of automation. The initiative drew a mixed reaction from ordinary people and informed military experts alike. Much of the blame lay with the developers (the military C5ISR Center and the Defense Department's Armaments Center), who, for the sake of the euphonious abbreviation ATLAS, put the words "lethality" and "advanced targeting" into the name. Frightened by stories of rebellious robots, Americans criticized the Army initiative for contradicting the ethics of war. Many pointed in particular to Pentagon Directive 3000.09, which prohibits transferring the right to open fire to an automated system. The integration of artificial intelligence and machine learning into ground vehicles, the protesters argued, could lead to reckless casualties among civilians and friendly troops. Among the critics were quite respectable scientists, such as Stuart Russell, professor of computer science at the University of California, Berkeley.


The classic tank is obsolete and needs automation as soon as possible, according to the US military. Source: htstatic.imgsmail.ru

The developers quite reasonably explained that ATLAS has nothing to do with the hypothetical "killer robots" humanity has dreaded since the first "Terminator". The system is built on target-search algorithms that fuse data from various sensor systems, select the most important targets, and inform the operator. The United States is currently testing an M113 armored personnel carrier with the ATLAS system integrated. The artificial intelligence algorithms not only display the most dangerous targets on the weapon operator's screen but also recommend the ammunition type and even the number of rounds for a guaranteed kill. According to the developers, the final decision on engaging a target remains with the gunner, and it is he who is responsible for the result. The main task of ATLAS in its armored configuration is to shorten the response to a potential threat: on average, a tank (IFV or APC) with the automatic assistant opens fire on a target three times faster. An armored vehicle can naturally handle group targets more effectively: the artificial intelligence promptly sorts targets in order of the threat they pose to the tank, lays the gun itself, and recommends the ammunition type. Since the beginning of August, various types of armored vehicles with integrated ATLAS components have been tested at the Aberdeen Proving Ground. Based on the results, a decision will be made on field trials and even on adopting such weapons into service.
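Judging by these descriptions, the core of the system is a ranking-and-recommendation loop. Purely as an illustration, here is a minimal Python sketch of such logic; the threat classes, weights, and ammunition table are invented for the example, since the real ATLAS algorithms are not public.

```python
from dataclasses import dataclass

# Illustrative only: the threat classes, weights, and ammunition table
# below are invented for the example; the real ATLAS algorithms are not public.
THREAT_WEIGHT = {"atgm_team": 1.0, "rpg_gunner": 0.8, "ifv": 0.7, "truck": 0.2}
AMMO_FOR = {"atgm_team": "HE-FRAG", "rpg_gunner": "HE-FRAG",
            "ifv": "APFSDS", "truck": "HE-FRAG"}

@dataclass
class Track:
    track_id: int
    kind: str       # classifier label supplied by the sensor suite
    range_m: float  # distance to the target, metres

def threat_score(t: Track) -> float:
    # Closer targets of more dangerous classes score higher.
    return THREAT_WEIGHT.get(t.kind, 0.1) * 1000.0 / max(t.range_m, 50.0)

def recommend(tracks: list[Track]) -> list[dict]:
    """Rank contacts by threat and attach an ammunition suggestion.
    The fire decision itself stays with the gunner."""
    return [{"track": t.track_id, "kind": t.kind,
             "ammo": AMMO_FOR.get(t.kind, "HE-FRAG"),
             "score": round(threat_score(t), 2)}
            for t in sorted(tracks, key=threat_score, reverse=True)]

if __name__ == "__main__":
    contacts = [Track(1, "truck", 900.0),
                Track(2, "atgm_team", 1400.0),
                Track(3, "ifv", 2200.0)]
    for rec in recommend(contacts):
        print(rec)
```

In this toy version, an ATGM team at 1,400 m outranks an IFV at 2,200 m, matching the article's point that the system sorts contacts by the danger they pose to the tank rather than by distance alone.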

Testing ATLAS components at the Aberdeen Proving Ground. The photo shows the M113 troop compartment. Source: c4isrnet.com

Tanks are now among the most conservative pieces of equipment on the battlefield. Many have seen no fundamental improvement in decades, remaining at the technical level of the 1970s-80s. This inertia is often tied to the sheer number of tanks in service in individual countries: seriously modernizing an armored force many thousands strong takes enormous resources. Anti-tank weapons, meanwhile, are developing by leaps and bounds. An excellent example is the current conflict in Nagorno-Karabakh, where Turkish and Israeli drones have proved extremely effective against Armenian tanks. Setting casualties aside, the cost-effectiveness of such anti-tank weapons makes them simply the kings of the battlefield. Of course, ATLAS will not protect against air threats, but it can be a good early-warning tool against tank-threatening targets such as ATGM crews or lone grenade launchers.


The potential targets considered by the authors of the Project Convergence concept. Source: defensenews.com

The Pentagon regards ATLAS not as a standalone military system but as part of the larger Project Convergence. This initiative is meant to take troops' situational awareness to the next level. Through machine learning, artificial intelligence, and an unprecedented saturation of the battlefield with drones, the Americans hope to seriously increase the combat capability of their units. The key idea is not new: connect every object on the battlefield into a common information structure and digitize the surrounding reality. For now, ATLAS is not fully included in Project Convergence because it lacks the ability to exchange data with its "neighbors", but in the future the tank's artificial brain will become shared property. Incidentally, the promotional video for the project designates China and Russia as unambiguous military targets.
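The "common information structure" idea is easiest to picture as a shared, machine-readable track format that every platform can publish and consume. The sketch below is purely hypothetical: the field names and structure are assumptions for illustration, not anything published for Project Convergence.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical message format: every field name here is invented to
# illustrate the "common information structure" idea; it is not taken
# from any published Project Convergence specification.
@dataclass
class TargetTrack:
    track_id: str   # network-wide track identifier
    source: str     # platform or sensor that produced the observation
    kind: str       # classifier label, e.g. "ifv" or "atgm_team"
    lat: float      # latitude, degrees
    lon: float      # longitude, degrees
    ts: float       # observation time, Unix seconds

    def to_json(self) -> str:
        return json.dumps(asdict(self))

# A drone publishes a track; a tank, a battery, or a command post
# could consume the same message.
print(TargetTrack("trk-042", "uav-7", "ifv", 39.47, -76.17, time.time()).to_json())
```

In such a scheme, the "neighbors" problem mentioned above reduces to getting every platform to agree on one such format.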

No trust in electronics


American troops already have negative experience with armed robotic systems. In 2007, three small tracked SWORDS platforms (short for Special Weapons Observation Reconnaissance Detection System), armed with M249 machine guns, were sent to Iraq. Although they were not fully autonomous vehicles, they managed to frighten soldiers by periodically swinging their machine-gun barrels chaotically while patrolling the streets of Baghdad. To the Pentagon this looked like unpredictability, and the tracked machine gunners were quietly sent home. In 2012, a directive was issued stating that automated and remotely controlled weapons systems must not fire on their own. Formally, ATLAS has been developed entirely within that provision, but the questions about the innovation have not gone away.

Some experts (in particular Michael C. Horowitz, a political science professor at the University of Pennsylvania) accuse the new system of oversimplifying the process of engaging a target. In effect, this level of automated search and target designation turns combat into an ordinary game like World of Tanks for the gunner. In the ATLAS guidance system the priority target is highlighted in red, an alarm sounds, and the system does everything it can to prod the human to open fire. In extreme combat conditions there is little time for a fire decision, and here a "smart robot" is urging you on. As a result, the fighter simply has no time to assess the situation critically and opens fire without understanding it. Whether ATLAS selected its targets correctly can only be assessed after the shooting is over. How ethical is this approach, and does it comply with the notorious American directive? Microsoft, incidentally, has already come under public condemnation, up to calls for a user boycott, for its helmet-mounted target designation system for the military.

The debate over robotic detection and guidance systems has gone on in the United States for years. Critics point, for example, to the errors of autopilot systems on public roads, which have already led to casualties. If even after millions of kilometers driven the autopilots have not become 100% reliable, what can be said of a brand-new ATLAS that may push tankers to fire a 120-mm projectile at an innocent person? Modern wars are as bloody as they are precisely because the military gained the ability to kill remotely, hiding behind a reliable barrier. The example of Nagorno-Karabakh, already mentioned, confirms this truth once again. If the fighter is also deprived of the chance to assess the target's parameters critically (and that is exactly where ATLAS leads), the victims may become far more numerous, and the blame for the killing can be partly shifted onto the machine.
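In software terms, the entire dispute comes down to a single gate between recommendation and trigger. Below is a minimal sketch, with invented names and message shapes, of the human-confirmation step that Directive 3000.09 demands:

```python
from typing import Callable

# Sketch of the human-in-the-loop gate the directive implies: software may
# rank and recommend, but the fire command requires an explicit operator
# action. All names here are invented for illustration.
def request_engagement(recommendation: dict,
                       operator_confirms: Callable[[dict], bool]) -> bool:
    """Return True only if a human explicitly authorizes the engagement.

    `operator_confirms` stands in for the gunner's decision; nothing in
    this path fires automatically on a timeout or by default.
    """
    print(f"PRIORITY TARGET: {recommendation['kind']} "
          f"(track {recommendation['track']}), suggested ammo: {recommendation['ammo']}")
    return bool(operator_confirms(recommendation))

if __name__ == "__main__":
    rec = {"track": 2, "kind": "atgm_team", "ammo": "HE-FRAG"}
    fired = request_engagement(rec, lambda r: input("Fire? [y/N] ").strip().lower() == "y")
    print("ENGAGED" if fired else "HOLD")

# The critics' point, in code terms: the safeguard is one callback deep.
# Swap `operator_confirms` for `lambda r: True` and the same pipeline fires
# with no human judgment at all, which is why the directive, rather than
# the architecture, is the real barrier.
```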

And finally, the main argument pacifist commentators level against ATLAS-type systems is the effective absence of any ban on opening fire automatically. At present, only the Pentagon's ethical requirements (which themselves come with plenty of reservations) prohibit fully automating the killing process. Once ATLAS is introduced, there will be no technical barrier left. Will the US Army really pass up such a promising opportunity to cut its response time to threats even further and keep its soldiers out of harm's way?
24 comments

  1. Sofa batyr
    Sofa batyr 5 November 2020 06: 10
    +4
    Whether ATLAS selected its targets correctly can only be assessed after the shooting is over.


    As soon as I read the system's name, ATLAS, I immediately remembered the episode in "Taxi 2" where a system of the same name is put to work.

    1. gurzuf
      gurzuf 5 November 2020 13: 42
      +1
      And I remembered this https://youtu.be/f8AKH_wPmMc
      1. Insurgent
        Insurgent 5 November 2020 14: 14
        +3
        Quote: gurzuf
        And I remembered this https://youtu.be/f8AKH_wPmMc

        The camera confused the ball with the referee's bald head

        Should they issue him a wig now or something, so his head doesn't shine and mislead the AI?
      2. Motorist
        Motorist 5 November 2020 21: 50
        +4
        Quote: gurzuf
        And I remembered this https://youtu.be/f8AKH_wPmMc

        And I remembered the cartoon "Polygon" (1977) about an automatic tank that attacked anyone who felt fear. Watch it! good
  2. Ka-52
    Ka-52 5 November 2020 06: 49
    +4
    Whether ATLAS selected its targets correctly can only be assessed after the shooting is over.

    And what will that change? Back in Yugoslavia, American pilots did a fine job of destroying tractors full of refugees, and in Iraq they periodically burned British Warriors laughing "Shoot first, ask questions later" is an old Yankee principle
  3. Viktor Sergeev
    Viktor Sergeev 5 November 2020 07: 40
    0
    When the Americans grow dumb and the army fills up with scum, you have to rely on artificial intelligence. The trouble is that the invention of AI is still a long, long way off: it does not exist, and it will not exist in the near future, since modern computer systems are still feeble.
    1. NDR-791
      NDR-791 5 November 2020 12: 40
      +4
      Let me tell you programmers' biggest secret: AI will never be created. For on the day it is created, all programmers will be left without work. And who among them needs that??? wassat
      1. Thomas N.
        Thomas N. 5 November 2020 15: 01
        +2
        Quote: NDR-791
        Let me tell you programmers' biggest secret: AI will never be created. For on the day it is created, all programmers will be left without work. And who among them needs that??? wassat

        It's not programmers who create AI, but the classic "mad scientist" like Emmett Brown from Back to the Future. laughing And losing his job certainly doesn't worry him, whether he's after fame or wants to "bless" humanity. Idealists and dreamers are the most "dangerous" people, yes!
        1. Viktor Sergeev
          Viktor Sergeev 5 November 2020 19: 12
          +1
          AI needs a computer next to which modern supercomputers are like a simpleton next to Einstein.
          1. My doctor
            My doctor 5 November 2020 20: 05
            0
            Quote: Victor Sergeev
            AI needs a computer next to which modern supercomputers are like a simpleton next to Einstein.

            Where does such knowledge come from?
            1. Viktor Sergeev
              Viktor Sergeev 6 November 2020 09: 32
              0
              From a brain that surpasses any supercomputer in its abilities. What the brain of a single person is capable of, no modern computer can do. So far a computer is just a complicated calculator, incapable of real thinking; what the human brain produces when it draws on the subconscious, no computer can deliver, at least not yet. There is a word, "intuition", and it is what defines real thinking: intelligence, the ability to compare and to draw conclusions from past experience and from the expected future.
              1. My doctor
                My doctor 6 November 2020 17: 06
                0
                Oh, completely off the mark.
                The subconscious can be filed alongside the Yeti, the Loch Ness monster, the Bermuda Triangle, and so on.
                Off topic, but let me show off my knowledge: the "brain and mind" problem is popular in philosophy. We will skirt everything connected with it and speak plainly instead. The brain is an organ of the central nervous system and, like everything else in a living organism, is designed for survival. (Believe it or not, there are representatives of the living world that at a certain stage of life eat their own brain as unnecessary once it has fulfilled its function.) Back to the topic. The brain copes well with its main task, and so does the computer (software plus hardware). A supercomputer at a large observatory processes in seconds more terabytes of information than the human brain receives in a year.
                1. Viktor Sergeev
                  Viktor Sergeev 6 November 2020 17: 56
                  -1
                  The whole trouble, yours and the author's, is that you have no idea what the brain's processing of information involves, or how a calculator differs from the human brain. First, estimate the amount of information in a single human dream, which passes in a split second; then try to estimate how much information must be processed just in order to walk. But the most important thing: intellect should not process and count but reason, show creativity, and foresee a situation even when it has never encountered one like it. The human brain does this in a microsecond (intuition), and a computer cannot do it, in principle, for lack of information. Do you have any idea how a person sees a picture and what work the brain does at that moment?
                  You know, back in our military school one fellow blurted out roughly what you're saying, that a computer has more memory than the human brain. It was funny.
                  1. My doctor
                    My doctor 6 November 2020 19: 35
                    0
                    Quote: Victor Sergeev
                    First, estimate the amount of information in a single human dream, which passes in a split second; then try to estimate how much information must be processed just in order to walk.

                    With all due respect, "information in a dream" is beyond my picture of the world. Information in a dream is not even the Yeti or the Loch Ness monster; it is UFOs and parapsychology, since information, by definition, is something transmitted.
                    Quote: Victor Sergeev
                    But the most important thing: intellect should not process and count but reason, show creativity, and foresee a situation even when it has never encountered one like it. The human brain does this in a microsecond (intuition), and a computer cannot do it, in principle, for lack of information.

                    There are games with a clear set of rules, and if that rule set is loaded into a suitable neural-network algorithm, then after a few hours of "self-play" the algorithm will beat the best human player. Not just chess or Go, any of them.
                    Quote: Victor Sergeev
                    Do you have any idea how a person sees a picture and what work the brain does at that moment?

                    Some still believe that a person perceives and processes information in categories: if the incoming information is already familiar and he can identify it, there is no problem. In reality, 99.999% of the time an adult does not see but recognizes what he has previously learned. A person cannot perceive more than a certain threshold; short-term memory, for example, holds about seven items, which is why there is a whole family of inattentional-blindness phenomena.
                    Quote: Victor Sergeev
                    You know, back in our military school one fellow blurted out roughly what you're saying, that a computer has more memory than the human brain. It was funny.

                    How sad. crying
                    1. Viktor Sergeev
                      Viktor Sergeev 6 November 2020 19: 57
                      -1
                      And will your computer beat a person in a situation where there are no mathematical formulas to calculate, where half the information is unknown and has to be filled in with intuition? A computer is not capable of learning; it only computes solutions, and for that it needs source code. And now about sleep: at that moment a person's consciousness switches off and the subconscious cuts in; a person sees dozens, if not hundreds, of dreams per second, each the size of a full-length film.
                      I still advise you to look into how a person sees and assembles a picture; you will enjoy it, and a computer cannot do it. The brain does a hundred things at once: it runs the body (a heap of organs) while processing incoming information. Just to walk (smoothly, precisely), the brain controls dozens of muscles to micron accuracy while spending a tiny fraction of its capacity; computer scientists cannot create anything like that.
                      Put a robotic system on the battlefield and it will lose, because a mountain of unpredictable conditions arises all at once. The brain takes them into account and adapts instantly (as they say, a person first acts and only then realizes what he has done, on automatic), while the computer will freeze or start making terrible mistakes, simplifying the task and following its template.
                      1. My doctor
                        My doctor 6 November 2020 20: 35
                        0
                        Quote: Victor Sergeev
                        will your computer beat a person in a situation where there are no mathematical formulas to calculate, where half the information is unknown and has to be filled in with intuition?

                        Uh... Are you saying that a person is more likely to make the right decision with only half the information needed? Then that is not half; it is sufficient information to make the right decision.
                        Quote: Victor Sergeev
                        A computer is not capable of learning; it only computes solutions, and for that it needs source code.

                        No. It is just as I wrote; moreover, the "self-learning" program can beat a program that calculates by predefined algorithms with databases of game situations.
                        Quote: Victor Sergeev
                        And now about sleep: at that moment a person's consciousness switches off and the subconscious cuts in;

                        From what I have read, and as far as I understand, science does not use the word "subconscious" at all, and any use of it is a measure of how serious a scientific article is.
                        Quote: Victor Sergeev
                        a person sees dozens, if not hundreds, of dreams per second, each the size of a full-length film.

                        There is a myth that in a critical situation a person perceives in accelerated mode, that time seems to slow down for him. It was decided to test this experimentally. In the experiments people did indeed confirm it subjectively, until they were asked, during the experiment, to make out something on a screen changing at a speed that would only be perceivable in that supposed state. Result: the hypothesis was not confirmed.
                        Quote: Victor Sergeev
                        I still advise you to look into how a person sees and assembles a picture; you will enjoy it, and a computer cannot do it.

                        It would be interesting to get acquainted with your sources.
                      2. Viktor Sergeev
                        Viktor Sergeev 7 November 2020 12: 57
                        -1
                        Lacking information, a person will still make a more correct decision and begin carrying it out, adjusting his actions as the situation changes. The computer will simply freeze, and the moment unpredictable circumstances arise that the programmer did not foresee, it will start producing such nonsense that everything comes to a dead end. A computer is good in simple, standard situations, but war is full of the unpredictable, especially a war between equal opponents.
  4. Sckepsis
    Sckepsis 6 November 2020 22: 45
    0

    The trouble is that the invention of AI is still a long, long way off

    But that is not a trouble; it is our salvation.
  • Vasily Kiryanov
    Vasily Kiryanov 5 November 2020 14: 10
    +2
    American tankers really need artificial intelligence. As a prosthesis.
  • Undecim
    Undecim 5 November 2020 14: 12
    +1
    The United States is currently testing an M113 armored personnel carrier with the ATLAS system integrated.
    Not only.

    The ALAS-MC turret has been mounted on a General Dynamics Griffin chassis to test the ATLAS system.
  • Thomas N.
    Thomas N. 5 November 2020 14: 54
    0
    "The US is currently testing the M113 armored personnel carrier with an integrated ATLAS system."
    Judging by the civilian equipment in the photo with the caption "Testing of ATLAS components at the Aberdeen Proving Grounds. The photo shows the M113 troop compartment. Source: c4isrnet.com" This is just a test bench assembled in a convenient spacious troop compartment of the M113 APC. I think the M113 will not be equipped with this system.
  • Lad
    Lad 5 November 2020 16: 50
    +3
    it is prohibited to transfer the right to open fire to automated systems


    Since when has that been prohibited? It has always been allowed, and plenty of civilians and animals died as a result, but that stopped no one. What's more, those systems used to be far dumber than today's. Take mines, for example. A mine goes off under a soldier, a civilian, or an animal; under a warship or a civilian vessel. It does not care. And no one requires an operator to pick the mine's target through a video camera. A mine is no different from a shooting robot; it kills just the same. And now, suddenly, the outcry.
  • Slon_on
    Slon_on 6 November 2020 09: 25
    0
    Yeah, they'll stuff AI into the Abrams, but the shells will still be slung into the gun by a human loader?
  • Maksim_ok
    Maksim_ok 14 November 2020 03: 54
    0
    "Among the critics were quite respectable scientists - for example, Stuart Russell, professor of computer science at the University of California at Berkeley." This university is a well-known breeding ground for half-mad leftists.

    "Many of them have not fundamentally improved for decades, remaining in the 70-80s of the last century in terms of technical development." It depends on what kind of tanks. This is true of the T-72B (1985) and T-72BA (1999), but Take the Leopard A7 and the base Leopard-2, isn't there any difference between them? If you think at the elementary school level, then no. There is a turret with a cannon, as well as tracks. The name is almost the same. But in fact these are machines with different capabilities. First of all, on the OMS and the information component. Satellite navigation, TIUS, combat control system and network capabilities, and then a new gun, ammunition and sights.

    As for ATLAS, such a system is definitely needed.