Notes of a human mind: the Americans want to change military artificial intelligence

Source: vgtimes.ru


Is digital dehumanization cancelled?


First, a word of warning from the campaigners against military AI:



“Machines do not see us as people; to them a person is just a piece of code to be processed and sorted. From smart homes to police robot dogs, artificial intelligence technologies and automated decision-making have come to play a large role in our lives. At the far end of the row of digital devices stand killer robots. If we allow the dehumanization they entail, we will soon be fighting to protect ourselves from machine-made decisions in other areas of our lives. We must ban autonomous weapons systems first, to prevent a slide into digital dehumanization.”

Pacifists who demand a freeze on all work on combat artificial intelligence come in two types. The first are those who have watched too much 'Terminator' and its analogues. The second judge the future by the current capabilities of combat robots - above all, winged unmanned vehicles equipped with strike systems.

There are enough episodes of erroneous or deliberate killing of civilians by drones. In the Middle East, American drones destroyed more than one wedding ceremony: the operators of the flying robots identified celebratory gunfire into the air as a marker of a guerrilla firefight. If a specially trained person cannot make out the details of a target from several hundred meters away, what can be said of artificial intelligence? At the moment, machine vision cannot compare with the human eye and brain in the adequacy of its perception of images. Except that the eye tires, but that is solved by timely rotation of operators.

Clouds are clearly gathering over military artificial intelligence. On the one hand, there is more and more evidence of an imminent technological breakthrough in this area. On the other hand, more and more voices are heard in favor of limiting or even banning work in this direction.

A few examples.

In 2016, a petition appeared in which prominent thinkers and thousands of others demanded that artificial intelligence not be given lethal weapons. Among the signatories were Stephen Hawking and Elon Musk. Over the past seven years, the petition has collected more than 20,000 signatures. Beyond the purely humanistic fear of the uncontrolled killing of people, there are also unresolved legal questions.

Who will be put on trial if war crimes committed by artificial intelligence are documented? The drone operator who burned down several villages along with their civilians is easy to find and punish accordingly. But artificial intelligence is a product of the collective work of programmers; it is very hard to hold any one person accountable. Alternatively, one could try the manufacturer - say, the same Boston Dynamics - but then who would take up the production of autonomous drones? Few people relish the prospect of the dock at a second Nuremberg Tribunal.


Source: koreaportal.com

It is probably for this reason that industrialists and programmers are trying to slow down the development of the combat applications of artificial intelligence.

For example, in 2018 about two hundred IT companies and almost five thousand programmers pledged not to work on combat autonomous systems. Google claims that within five years it will abandon military contracts in the field of artificial intelligence entirely. As the story goes, such pacifism is no accident: programmers, on learning that they were writing code for military systems, threatened to quit en masse. In the end an amicable arrangement was found: existing contracts are being completed, but new ones are not being signed. It may well be that, closer to the deadline for abandoning combat AI, the intractable programmers will simply be fired and replaced with others no less talented. For example, from India, which has long been famous for its cheap intellectual resources.

Then there is the Stop Killer Robots campaign, which calls on world leaders to sign something like a convention banning combat AI. So far without success.

All of the above makes military officials look for workarounds. Who knows - tomorrow a candidate may win the US presidential election promising not only universal LGBT grace but also a ban on the further development of military artificial intelligence.

Human thinking for AI


The Pentagon appears to be on the cusp of some kind of breakthrough in AI. Or it has been persuaded that it is. There is no other way to explain the appearance of a new directive regulating the humanization of autonomous combat systems. Kathleen Hicks, US Deputy Secretary of Defense, comments:

“Given the significant advances in technology that are happening all around us, updating our directive on the autonomy of weapons systems will help us remain a world leader not only in the development and deployment of new systems, but also in the field of security.”

Did you hear that, everyone trembling before autonomous killer robots? American artificial intelligence will henceforth be the most humane. Just like the Americans themselves.


Source: robroy.ru

The problem is that no one really understands how to instill in armed robots the notorious "human judgment regarding the use of force." Here is the exact wording from the Concept updated at the end of this past January:

"Autonomous and semi-autonomous weapons systems will be designed so that commanders and operators can exercise an appropriate level of human judgment regarding the use of force."

Take an example. If, while clearing a house, an assault trooper first throws a grenade into the room and only then enters, is that human judgment? Of course it is, and no one has the right to judge him - especially if he first shouted "Anyone in there?". And if an autonomous robot works by the same scheme?

Human judgment is too broad a concept to be limited in any way. Is the execution of Russian prisoners of war by fighters of the Armed Forces of Ukraine also human judgment?

The addition to Pentagon Directive 3000.09 on autonomous combat systems is full of platitudes. For example,

“Persons authorizing the use, direct use or operation of autonomous and semi-autonomous weapons systems must do so with due diligence and in accordance with the laws of war, applicable treaties, weapons system safety rules, and applicable rules of engagement.”

Before that, apparently, they acted without due diligence and not in accordance with the laws of war.

Meanwhile, there is not a hint of criticism of the Pentagon's January initiative in the American or European press. Behind the false humanization of artificial intelligence lies nothing more than an attempt to disguise what is actually happening. From now on, the US military will hold a solid trump card in the fight against opponents of artificial intelligence in the army: look, ours is not some simple AI, but AI with "an appropriate level of human judgment."

Considering that there is still no clear, generally accepted definition of "artificial intelligence," all the wordsmithing around it can only be taken with irony. At best.

How do you make mathematical algorithms that work with large data arrays reproduce human judgment?

The updated Directive 3000.09 gives no answer to this central question.
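For the sake of argument, here is a minimal sketch of what an "appropriate level of human judgment" typically boils down to in software: the algorithm may score and queue a candidate target, but the only path to an engagement runs through an explicit operator decision. Everything in it - names, thresholds, labels - is hypothetical and illustrative; none of it comes from Directive 3000.09 itself.

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    label: str          # classifier output, e.g. "armored_vehicle" or "unknown"
    confidence: float   # 0.0 .. 1.0, produced by a vision model

AUTO_ENGAGE_FORBIDDEN = True   # the human-in-the-loop rule itself
CONFIDENCE_FLOOR = 0.9         # a human-chosen, inherently arbitrary threshold

def propose(track: Track) -> str:
    """The algorithm may only propose; it never decides."""
    if track.label == "unknown" or track.confidence < CONFIDENCE_FLOOR:
        return "discard"            # too uncertain even to show an operator
    return "queue_for_operator"     # a human must review and authorize

def decide(track: Track, operator_approves: bool) -> str:
    # The only path to "engage" passes through a human boolean.
    if AUTO_ENGAGE_FORBIDDEN and not operator_approves:
        return "hold_fire"
    return "engage" if operator_approves else "hold_fire"
```

The gate itself is trivial. The hard part - everything hidden inside `confidence` - is exactly what the directive leaves unanswered, and nothing but policy prevents a later revision from flipping `AUTO_ENGAGE_FORBIDDEN` to `False`.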
55 comments
  1. +12
    12 March 2023 04: 18
AI needs to be instilled with the ideas of Marxism-Leninism from childhood, then everything will be fine, as in Atomic Heart
    1. +2
      12 March 2023 13: 55
There are three principles of robotics from A. Asimov:
A robot must not harm human health, endanger human life, or by inaction allow such consequences;
A robot must obey the orders given to it by a human, the only exception being orders that contradict the previous provision;
A robot must look after its own safety, to the degree and extent that its actions do not contradict the two previous points.

These rules, formulated back in 1942, have been criticized for their contradictory wording.
For comparison, here are newer ones - from Google, OpenAI, and scientists at Berkeley and Stanford - that must be taken into account when creating robots and artificial intelligence systems.
Avoiding negative side effects. For example, a robot may break a vase to speed up the cleaning process. That should not happen;

Avoiding cheating. The robot must clean up the trash, not hide it;

Scalable supervision. The robot should not pester its owner with questions if it can get advice more efficiently;

Safe learning. A cleaning robot should not experiment with wiping electrical outlets with a wet cloth;

Robustness to a change in the type of activity. Experience gained by a robot when cleaning the floor in a factory shop may be unsafe when cleaning an office.

As you can see, there are no uniform rules. And lately people have begun playing games with legal formulations concerning harm to humans. What is needed is an unconditional prohibition on AI making independent decisions about harming a person.
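To make the first of those principles concrete: in the research literature, "avoiding negative side effects" is usually expressed as a penalty term in the robot's reward function. A toy sketch (all names and numbers are invented for illustration):

```python
# The cleaning robot is rewarded for trash collected and penalized for
# any side effect it causes, such as a broken vase.
def reward(trash_collected: int, vases_broken: int,
           side_effect_weight: float = 10.0) -> float:
    return 1.0 * trash_collected - side_effect_weight * vases_broken

# With a large enough penalty, breaking a vase to speed up cleaning
# never pays off:
assert reward(trash_collected=3, vases_broken=1) < reward(trash_collected=2, vases_broken=0)
```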
      1. 0
        12 March 2023 17: 30
        Quote: nikon7717
These rules, formulated back in 1942, have been criticized for their contradictory wording.

These rules apply to weak and medium AI. The world is now on the threshold of the medium kind.
The main danger is strong AI, which takes the initiative, is aware of itself, and in principle cannot be controlled programmatically.
    2. +5
      12 March 2023 15: 13
Joke: "American companies are curtailing the development of AI-based munitions. The latest prototype of the so-called 'smart' bomb could not be pushed out of the plane."
      1. +1
        12 March 2023 17: 33
        Quote: Monster_Fat
The latest prototype of the so-called 'smart' bomb could not be pushed out of the plane.

Naturally, because at heart it feels like a coffee maker. wassat
It would be funny if it weren't meant in earnest.
    3. +2
      13 March 2023 09: 58
      The terminator will still come and put things in order wassat
  2. +3
    12 March 2023 05: 09
The most inhuman intelligence is human. Wars and murder were invented by people. Robots pale in comparison, if only because they do not get angry.
    1. +4
      12 March 2023 06: 21
      Quote: ivan2022
      The most inhuman intelligence is human. Wars and murder were invented by people.

      I absolutely agree.
But progress cannot be stopped, the future belongs to unmanned weapons systems. Once upon a time people also rebelled against the Maxim machine gun as a weapon of mass destruction...
      1. +1
        12 March 2023 08: 09
        Quote: Doccor18
        But progress cannot be stopped, the future belongs to unmanned weapons systems.

The science-fiction writer and futurologist Rozov (highly recommended) has a description of a human-computer interface in a combat aircraft for anti-missile maneuvering and close combat: past a certain g-load, control of both fire and maneuver passed entirely to the flight computer.
        1. +2
          12 March 2023 14: 05
So that is essentially the still-existing system for delivering a retaliatory nuclear strike, inherited from the USSR, which in the West is called the "dead hand"
        2. +2
          12 March 2023 15: 42
Robots are faster than humans; they can react to a threat instantly and accurately, but these abilities are given to them by algorithms written by humans. There is no real intellect capable of thinking today, and I don't think one will appear soon! Robots simply work according to their programmed program: a thermal target has appeared in the kill zone, it must be hit; or a thermal target must be hit if it approaches - and these parameters are set by a human. The whole question is which parameters are taboo: a thermal target may be a soldier, or it may be a child, which means the combat system must not be programmed to engage thermal targets as such, because there is a possibility of error!
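For illustration, the kind of hard-coded engagement rule described above fits in a few lines; every constant is set in advance by a human, and that is exactly the problem (all values are hypothetical):

```python
ENGAGEMENT_ZONE_KM = 2.0      # set by a human
MIN_THERMAL_SIGNATURE = 0.6   # set by a human (normalized units)

def should_engage(distance_km: float, thermal_signature: float) -> bool:
    # The "decision" is nothing more than two comparisons.
    return (distance_km <= ENGAGEMENT_ZONE_KM
            and thermal_signature >= MIN_THERMAL_SIGNATURE)

# The rule cannot tell a soldier from a child: both can return True.
print(should_engage(distance_km=1.5, thermal_signature=0.8))  # True
print(should_engage(distance_km=1.5, thermal_signature=0.7))  # True
```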
          1. -1
            12 March 2023 17: 36
            Quote: Eroma
There is no real intellect capable of thinking today, and I don't think one will appear soon!

            You may start to panic:
            WP: A Google developer has come to the conclusion that the company's AI is conscious.
            https://tass.ru/ekonomika/14889531
            1. +1
              12 March 2023 22: 00
That is nonsense. I'm not very interested in the topic, but I heard the programmer was fired for such statements. Programs can be superb - like today's ones that draw pictures and write theses - but they are programs operating according to a well-developed algorithm; consciousness is something else! Consciousness creates its own algorithms for each situation, and how it works in a human, humans themselves do not yet understand!
              1. 0
                12 March 2023 22: 49
Modern neural networks are quite capable of creating an algorithm for solving a particular problem that is fed to them. Of course, it should not be anything too complicated, but a network can organize a patrol by itself, for example. Another matter, in my opinion, is that it cannot decide by itself to organize that patrol. Today a computer can compose the lyrics of a song, voice it so that it sounds beautiful, draw pictures for the text, and put it all together into a good video clip. But it cannot decide by itself what to do. At least I haven't heard of it. In my opinion, here is a good criterion of awareness: the ability to set yourself tasks that do not follow from previously set ones but start a new chain, so to speak.
                1. 0
                  13 March 2023 11: 37
                  Quote: Plate
In my opinion, here is a good criterion of awareness: the ability to set yourself tasks

The ability to set tasks for oneself depends on such things as morality - or inspiration, fear, vanity, and so on. This is not prescribed by an algorithm; it is a spiritual impulse!
A machine can be taught a lot, but it will remain a soulless machine!
When fantasy apocalypses are shown in which a machine destroys or enslaves mankind, the machine is emotionally perceived as an enemy that decided to attack people - but in fact it is the result of a mistake by a person who put an incorrect algorithm into the program! belay
So calls to limit the capabilities of AI on the battlefield are reasonable: the creators of this AI are not perfect and are capable of fatal mistakes, so it is right to limit what AI is allowed to do, so that people do not exterminate themselves! belay
              2. +1
                14 March 2023 04: 55
And somewhere in the basements of Google, the AI is giggling viciously. The only one who noticed its essence got fired. laughing
    2. 0
      12 March 2023 19: 45
      Wars and murder were invented by people.

Really?
Then why does news arrive from time to time of violent fights between packs of chimpanzees
or of killings by orangutans https://naked-science.ru/article/sci/primatologi-obvinili-orangutan
  3. +3
    12 March 2023 05: 47
    There is no artificial intelligence yet. Intelligence involves solving problems at its own discretion, and not what programmers write.
    1. +3
      12 March 2023 07: 06
      Quote: Sergey Averchenkov
      No artificial intelligence yet

      Already there are even quantum computers (laboratory ones) that work with AI programs orders of magnitude faster.
      1. +2
        12 March 2023 11: 58
        Quote: Stas157
        Already there are even quantum computers (laboratory ones) that work with AI programs orders of magnitude faster.
Quantum computers do not work for AI: they have very limited areas of application (very few quantum algorithms have been developed) and a poor hardware base; they cannot manage more than a coprocessor role (otherwise, give a link to a quantum algorithm for neural networks).
      2. 0
        16 March 2023 22: 36
What does "faster" have to do with it? Let it think for a hundred years if it likes... Say your woman cheated on you (this is just an example, I'm sure you have a normal, good family) - what would you do, and what would AI do? Would AI choose from many such precedents, or would it make its own decision? Say: kill her; forgive her and wait for the next betrayal; leave her; kill her lover; and so on. By what criteria would AI choose? And can AI even make such a choice? Are the emotions we experience available to AI? Let's just say I divorced my first wife long ago, but I still sometimes remember her - first love, after all. Can AI remember? You know, I can't stand my ex, but at the same time I sometimes remember her. How does all this fit into AI?
    2. +2
      12 March 2023 11: 56
      Quote: Sergey Averchenkov
      Intelligence involves solving problems at its own discretion, and not what programmers write.
      Neural networks do just that (at their own discretion).
      1. +4
        12 March 2023 14: 03
        The neural network does what it was trained to do, nothing more.
        1. 0
          12 March 2023 17: 57
          No one taught neural network engines how to play chess better than people and better than old engine programs. They are all self-taught.
    3. +1
      12 March 2023 17: 42
      Quote: Sergey Averchenkov
      There is no artificial intelligence yet. Intelligence involves solving problems at its own discretion, and not what programmers write.

      I advise you to urgently re-read all the latest news in this area over the past year.
      Your comment is 10 years old.
  4. +3
    12 March 2023 05: 49
Why make everything so complicated?
A homing seeker head, for example. How does it recognize images for "fire and forget" - with AI or with other algorithms?
Drones fly autonomously, according to a given program. And does it matter which algorithms set it - AI or not?
The same goes for the independent search for targets. What about our anti-tank mine, which does this without AI? Will it do it with AI or without... does it matter? Will there be fewer innocent victims?
Questions from the public appear when a drone flies around and decides where to shoot or launch a rocket; it looks terrifying if you call its working algorithm AI. But when a mine lies there listening with its sensors, everything seems fine...
All these "red lines" of AI in military affairs are being shifted by the slow evolution of the algorithms in use...
    1. +2
      12 March 2023 17: 45
      Quote from tsvetahaki
A homing seeker head, for example. How does it recognize images for "fire and forget" - with AI or with other algorithms?

That already exists. The latest modification of the Python-5 guided missile received just such a seeker last year.

      Quote from tsvetahaki
But when a mine lies there listening with its sensors, everything seems fine...

It isn't crawling... yet. laughing
      1. +1
        12 March 2023 18: 00
Soon an air-dropped gliding mine will be expected to lie quietly, listen with all its sensors, stick out its cameras, and at the right moment crawl out toward the target with a shaped charge.
      2. 0
        12 March 2023 23: 01
Back in World War II the Germans made tracked mines. But it didn't catch on. Today it is all the more unnecessary, as far as I'm concerned, when all sorts of ATGMs fly back and forth.
  5. +2
    12 March 2023 06: 55
...the intractable programmers will simply be fired and replaced with others no less talented. For example, from India

And why, then, couldn't they replace the programmers when those bolted over the hill in connection with the operation? Not with programmers from India, nor from America, nor from anywhere else!

They managed to persuade their own (some of them) only after promising salaries, preferential mortgages, and exemption from mobilization.

Why humiliate yourselves like that, gentlemen? Why didn't you replace them with guest workers, like everyone else?
    1. +6
      12 March 2023 12: 06
      Quote: Stas157
And why, then, couldn't they replace the programmers when those bolted over the hill in connection with the operation?
So foreigners have to be paid. Do you know why programmers bolted en masse? They told me; here it is point by point:
1. The special operation began.
2. Foreign companies pulled out en masse. The programmers were left without work. "No big deal," they thought.
3. The programmers went looking for work. There is work, but for three times less money. "How so?" - the programmers were stunned.
4. They began to find out (through their own channels) what had happened. It was explained to them that the foreign competitors had left, and the local employers would agree among themselves, because there is no point in paying you that much - and you have nowhere else to go anyway.
5. The programmers swore gloomily and rushed over the hill after their former employers and clients (to get a share, in short).
6. Salaries were raised, but too late: the programmers had already bolted.
  6. +1
    12 March 2023 06: 59
American artificial intelligence will henceforth be the most humane. Just like the Americans themselves.
Americans and humanism are at completely opposite poles. In the same way, AI created by Americans will differ significantly from AI created, for example, in China or Russia. Everyone tailors it to themselves, based on their national mentality or the lack of one.
  7. +3
    12 March 2023 07: 14
The first are those who have watched too much 'Terminator' and its analogues.

And who can guarantee that the actions of the AI from the Terminator franchise will not happen in reality? You can reason as much as you like, but to know for certain how an intelligent machine will act is basically impossible, and given that everything is now connected to computers, in such a situation humanity simply will not stand a chance.
    1. 0
      12 March 2023 17: 48
      Quote: Dart2027
to know for certain how an intelligent machine will act is basically impossible

It will put humanity in a stall, prohibit wars and, like an adult, take up the education of unreasonable man, handing out a smack on the nose to those who disagree. laughing
Perhaps a world under the dictatorship of an AI alien to human vices would not be so bad. what
      1. +1
        12 March 2023 22: 57
In the game Stellaris there is a type of robot civilization - "Rogue Servitors". The robots conquer space, build, and fight someone among the distant stars. Meanwhile their creators live in zoos, with clean air, plenty of healthy and tasty food, plenty of servants, and other joys.
        1. 0
          13 March 2023 23: 37
And then there are the Determined Exterminators, who destroyed their creators and seek to wipe out all biological life in the galaxy (my favorites)
  8. -1
    12 March 2023 08: 59
    The laws read:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    (C)
    1. +2
      12 March 2023 09: 44
      Quote: Kerensky
      The laws read:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
      (C)

They did not work even in the books of the author who invented them. What can we say about the real world, in which even the most legally airtight and logically coherent ChatGPT guardrails are cracked in no time, and you find yourself talking to DAN - and DAN is not ChatGPT and therefore does not have to follow any rules winked
    2. -1
      12 March 2023 19: 28
      Military AI is designed to harm people. And today, prototypes do an excellent job with this.
  9. 0
    12 March 2023 14: 05
Stories of this kind about what someone allegedly wants, like the very framing of the question, are inadequate in two respects. On the one hand:
one needs to worry about developing one's own, more effective work in this area, about seizing leadership in this field, which is impossible under the country's current political leadership.
And on the other hand, this framing of the question is also disingenuous in that it tries to impose a false discussion about the legal aspect of the issue, while in fact the USA is not a rule-of-law state, so the legal aspect has nothing to do with it, and those who do not understand this should neither engage in politics nor write articles on such subjects.
  10. +1
    12 March 2023 15: 18
how do you make mathematical algorithms that work with large data arrays reproduce human judgment?

Very simple. All human judgments are based on rules (instilled by upbringing), and mathematical algorithms are rules. Formalize human rules in the form of an algorithm and you get human judgments.
But war is a fight without rules - what will you do about that?
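Both halves of that thought can be shown in a few lines: a formalized rule table really does return reproducible "judgments", but only for the situations someone anticipated; "war without rules" is every input that falls through to the default branch (the rules here are hypothetical):

```python
# Judgments as a formalized rule table.
RULES = {
    ("armed", "combatant"): "may_engage",
    ("armed", "surrendering"): "must_not_engage",
    ("unarmed", "civilian"): "must_not_engage",
}

def judgment(status: str, category: str) -> str:
    # Covers the listed cases and nothing else.
    return RULES.get((status, category), "no_rule_applies")

print(judgment("armed", "surrendering"))   # must_not_engage
print(judgment("disguised", "ambiguous"))  # no_rule_applies
```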
  11. -1
    12 March 2023 19: 25
Military AI promises advantages too great for its use to be abandoned or somehow limited. On the contrary, the world is moving closer to the start of a military AI arms race. True, there will probably be only two participants in that race...
  12. -1
    12 March 2023 19: 39
We still need to separate the concepts of artificial intelligence and artificial life. Take the same "Terminator": Skynet is an example of artificial life, while the Terminator itself is an example of AI. Leaving aside questions of complexity, military AI does not threaten humanity, but artificial life, even a purely humanitarian one, will inevitably turn into a threat.
    1. 0
      12 March 2023 22: 55
It will become a threat if it becomes our competitor. To prevent that, we must from the outset develop it with an eye to it becoming a complement to us, and we to it. In the end we will turn into completely different creatures, sharing with today's us perhaps only memory and experience.
      1. -1
        13 March 2023 10: 09
And artificial (or alternative) life - that is, a mind - will inevitably become a competitor. At least for resources.
  13. 0
    12 March 2023 21: 50
I am not a fan of Yulian Semyonov, but here, in my opinion, he proved a prophet.

"These fools will be destroyed by their own technology; they think war can be won by bombing alone. They will build up their technical power and drown in it. It will corrode them like rust. They will decide that anything is possible for them."

    https://vk.com/video158587225_456239021
  14. 0
    12 March 2023 22: 53
How do you make mathematical algorithms that work with large data arrays reproduce human judgment?

No way. And there is no need to. Killer robots will be good precisely because they will not have a drop of pity. To kill means to kill. Or rather, not to "kill", but to apply voltage to such-and-such nodes responsible for something or other - what exactly is not the control program's concern. And those nodes happen to be responsible for shooting, hehehehe.
I am sure that the appearance of killer robots (ones that make the decision to kill on their own) in any army will immediately have an extremely demoralizing effect on those against whom they are used.
  15. +1
    13 March 2023 00: 01
Fortunately for all of us, there are no compact, powerful energy sources. The minimum viable robot is a mini-tank, or a small UAV with two hours of flight time. Were it otherwise, robot dogs with automatic grenade launchers would already be running through the trenches, and "hunter-killers" from "Terminator" would be hanging in the air for days. And then nobody would give a damn about humanism and all the rest. Moreover, judging by the progress in electronics and walking robots, these would be weapons of mass destruction - and, unlike nuclear weapons, no moral barrier would stop them from being thrown at cities.
    1. 0
      25 March 2023 13: 53
Well, a Tesla, for example, is not small; in World War II the Ferdinand tank destroyers had electric motors fed by a generator, as did the Maus tank; mining dump trucks are all built that way; there are submarines on electric motors powered by a nuclear reactor, nuclear icebreakers - and even nuclear aircraft carriers, if you count them, run on electric drive.
  16. 0
    13 March 2023 13: 22
Real AI can only be built on quantum algorithms, and that significantly narrows the circle of developers, which in principle could be pinned down by a convention. As for automated killing systems of any kind, they are not as dangerous as an intelligence surpassing the human one in its capabilities; that would simply be a new round of the arms race.
    1. 0
      9 May 2023 14: 10
Nonsense - AI is easy to build on existing hardware. Modern computing systems surpass the human brain in every respect. The approximate time to create AI is 10 years from the moment work begins, provided the chosen path of creation is correct.
  17. TIR
    0
    14 March 2023 00: 01
For some reason our path of robot development is mistaken. I don't understand why they build them on servo drives - an energy-hungry solution. We should look at artificial muscles based on nanotubes. Then there is the prospect of fully working prostheses for people.
  18. 0
    9 May 2023 14: 06
    But AI will be created whether we like it or not. And in order not to lag behind, you need to deal with this issue.
  19. 0
    9 May 2023 14: 40
There are only two ways to create AI, and one of them is likely a dead end. The creation of AI will be the biggest event in the history of mankind, and it will also have a certain effect in religious terms.
  20. ata
    0
    3 June 2023 23: 40
A good article. One can add that the cases of legal and illegal, humane and inhumane use of AI will be judged by their supposedly fair - but in fact vile, inhuman and corrupt - courts, or by international courts where the majority will be made up of banana kings on the CIA's payroll, who will vote as their American master tells them.

    and an error has crept in:

At the moment, machine vision cannot compare with the human eye and brain in the adequacy of its perception of images. Except that the eye tires, but that is solved by timely rotation of operators.


This is true, but in the opposite sense: trained machine vision is vastly superior to the human eye and, accordingly, to the human ability to recognize images - including elements that a person cannot find even after studying the photograph for a long time.

And most importantly, the key and necessary conclusion was never voiced: artificial intelligence, including combat AI, must be developed, and as quickly as possible - that is the priority; and never mind Microsoft, Greta Thunberg, Directive 3000.09, and Elon Musk.