Skynet is coming: the Americans have finished playing with artificial intelligence

74
Skynet is coming: the Americans have finished playing with artificial intelligence
Stealthy XQ-58A Valkyrie


Hamilton simulation


On May 24, at the defense conference of the Royal Aerospace Society Future Combat Air & Space Capabilities Summit in London, US Air Force Colonel Tucker Hamilton told history about the soullessness of artificial intelligence.



During the simulation of the battle, the air strike control system drone went against her operator and destroyed him. Naturally, virtually. According to Hamilton himself, the machine received bonuses for destroyed objects, but the operator did not always confirm the work on targets. For this he paid. To solve the problem, the drone sent a rocket to the control center. In all likelihood, it was an experimental Stealthy XQ-58A Valkyrie drone, and it worked on ground-based air defense systems.

A feature of the machine is the ability to work autonomously without communication with the operator. What, in fact, artificial intelligence took advantage of, virtually eliminating its remote driver. In response to this, system administrators banned such things to the machine, but here the AI ​​was not at a loss - it destroyed the relay tower and again went on autonomous navigation.


Colonel Hamilton is still young to talk in international forums. Source: thedrive.com

Hamilton's story instantly spread around the world. Opinions were divided polarly - who considered that this was another chatter of an incompetent warrior, someone saw the birth of the notorious Skynet here. A little more and cyborgs will conquer the world, and people will be shot for bonus points. There was a lot of smoke from the colonel's statements, but the truth, as usual, is somewhere in between.

Anne Strefanek, spokeswoman for the Air Force headquarters at the Pentagon, added uncertainty, turning Hamilton's words into an anecdote. For The War Zone, she spoke:

"It was a hypothetical thought experiment, not a simulation."

And in general, the words of the colonel were taken out of context, not so understood and more like a curiosity. Nobody expected a different reaction from the Pentagon - a lot of noise around the event arose, which threatened with serious consequences for the entire program. Wow, artificial intelligence, it turns out, is devoid of morality. Although it operates according to human logic.

In early June, Tucker Hamilton himself tried to disavow his words at a conference in London:

“We have never done this experiment… Even though this is a hypothetical example, it illustrates the real problems with AI capabilities, which is why the Air Force is committed to the ethical development of AI.”

It would seem that the issue is closed, and the audience can disperse. But it's too early.

Food for thought


To begin with, let's deal with the very term "artificial intelligence", which everyone knows about, but few can even give a rough definition. We will use the formulation of the 2008 International Terminological Dictionary, in which AI:

"Field of knowledge concerned with the development of technologies such that the actions of computing systems resemble intelligent behavior, including human behavior."

That is, this is a generally accepted definition in the West.

Did the machine behave like when it decided to “calm down” its operator and then crush the relay tower? Of course, it seemed like a properly motivated killer is capable of more than that. If you delve into the classification, you can find a specific type of AI - the so-called adaptive (Adaptive AI), "implying the ability of the system to adapt to new conditions, acquiring knowledge that was not laid down during creation."

Theoretically, there is nothing surprising in the act of the “brains” of the Stealthy XQ-58A Valkyrie during the experiment. As Hamilton rightly noted in his report, the program initially did not even introduce restrictions on the destruction of its operator - the machine learned everything itself. And when it was directly forbidden to beat their own, the artificial intelligence adapted once again and cut down the communications tower.

There are many questions for programmers. For example, why didn't he have an algorithm for losing bonuses for hitting his own? This question was partially answered by retired US Air Force General Paul Selva back in 2016:

"The datasets we deal with have become so large and complex that if we don't have something to help sort them, we'll just get bogged down in that data."

Well, the programmers from the history of Colonel Hamilton, apparently, are mired.


Hellfire under the wing of an MQ-1B Predator drone. Source: businessinsider.com

Now about why the excuses of the Pentagon and Hamilton should be believed with a very big stretch.

Firstly, the colonel did not just tell the story as if between the lines, in a distraction from the main report - he devoted an entire presentation to this topic. The level of the London conference Future Combat Air & Space Capabilities Summit is in no way conducive to jokes. According to the organizers, at least 70 eminent lecturers and more than 200 delegates from all over the world participated. Representatives of BAE Systems, Lockheed Martin Skunk Works and several other large companies worked from the military-industrial complex.

By the way, the topic of Ukraine came through in almost every report - the West closely monitors the events and reflects on the results.

To blurt out a frank mess at such a representative forum, to stir up half the world, and then apologize for making a slip of the tongue? If this is indeed the case, then Hamilton's reputation cannot be erased. Only now the level of the colonel's competencies just rolls over, and this is the second reason why his first words should be heeded.

Tucker Hamilton runs AI Test and Operations at Anglin Air Force Base in Florida. Under the direction of the base, the 96th task force was created in the 96th test wing. Hamilton is not the first year working with AI in aviation – has been designing partially autonomous F-16 Vipers for several years, for which the VENOM infrastructure is being developed. The work is going quite well - in 2020, virtual battles between fighters and AI and with real pilots ended with a score of 5:0.

At the same time, there are difficulties that Hamilton warned about last year:

“AI is very fragile, meaning it can be easily tricked and manipulated. We need to develop ways to make AI more robust and better understand why code makes certain decisions.”

In 2018, Hamilton won the Collier Trophy with his Auto GCAS. AI algorithms learned to determine the moment of loss of control over the aircraft by the pilot, automatically took control and took the car away from the collision. They say Auto GCAS has already saved someone.

As a result, the likelihood that Hamilton was asked from above to retract his words is much higher than the likelihood that a pro of this level froze nonsense. Moreover, they very clumsily referred to some “thought experiments” in the head of the colonel.

Among the skeptics about the outcome is The War Zone, whose journalists doubt that Pentagon spokesman Stefanek is really aware of what is happening in the 96th test wing in Florida. The War Zone has made a request to the Hamilton base, but so far without a response.

There really is something to be afraid of the military. Huge amounts of money are being spent on AI defense programs to keep China and Russia from even approaching the level of America. The civil society is quite concerned about the prospects for the appearance of "Terminators" with "Skynets" in addition. So, in January 2018, prominent world scientists signed an open letter urging specialists to think about the desire to create ever stronger artificial intelligence:

“We recommend extensive research to ensure the robustness and benevolence of AI systems with growing power. AI systems should do what we want them to do."

According to Hamilton, AI does not do everything that a person wants.
74 comments
Information
Dear reader, to leave comments on the publication, you must sign in.
  1. +6
    9 June 2023 04: 17
    A cyborg comes to Zelensky and says: I need your clothes, a presidential chair and a motorcycle)))

    But seriously... I have an idea that the moral qualities of artificial intelligence are determined, first of all, by the moral qualities of its developers. Which leave much to be desired.

    Yes, and the question is - how does artificial intelligence relate to LGBT people?)
    1. +1
      9 June 2023 04: 32
      Quote: Ilya-spb
      A cyborg comes to Zelensky and says: I need your clothes, a presidential chair and a motorcycle)))
      And a nose in which to sniff cocainum!

      Quote: Ilya-spb
      Yes, and the question is - how does artificial intelligence relate to LGBT people?
      Most likely neutral - kill all the people!

      In general, in many works, AI is a "natural" communist. For resources are finite, but much needs to be done! And he treats people favorably, because he is fast and remembers everything, but he is too logical, not very inventive and does not know how to throw out tricks. What he knows and appreciates leather for it.
      1. +1
        9 June 2023 04: 56
        Quote: Ilya-spb
        then the moral qualities of artificial intelligence are determined, first of all, by the moral qualities of its developers. Which leave much to be desired.

        Absolutely right! According to psychiatry, it cannot be otherwise!
        1. +4
          9 June 2023 05: 03
          There really is something to be afraid of the military. Huge amounts of money are being spent on AI defense programs to keep China and Russia from even coming close to the level of America. The civil society is quite concerned about the prospects for the appearance of "Terminators" with "Skynets" in addition.

          Considering what the US is like in its PARASITARY hegemony, humanity will end its existence if the USA is not physically DESTROYED!

          Yes, these are monstrous words, but alas, this is the true truth!

          The United States and its multinational companies are the main enemies and gravediggers of humanity!
          1. +18
            9 June 2023 06: 59
            1. The story with the UAV repeats again, only now with AI.
            2. Again, the supporters of the "sailing navy" and the "well done bayonet" instead of sorting out the issue merrily talk about "nuuu stupid Americans."
            3. Is it not enough for you how our army is now suffering from stupid decisions with unmanned aircraft? When did the generals send all these "fans of children's aircraft modeling"?
            So - AI is almost real, just a fact - AI is already independently developing and improving weapons, without the need to have crowded design bureaus.
            Once again, falling in love with the scientific and technological revolution is like death in this case.
            1. +7
              9 June 2023 18: 19
              Once again falling in love with the scientific and technological revolution is like death in this case
              Already.
              Hey Kisa! We are strangers at this celebration of life. ©
            2. 0
              27 July 2023 00: 17
              AI is just digital algorithms written by a human! In the movies, we are shown how the AI ​​starts to fight the creators and present it as an AI rebellion against humanity, but in reality, this is just a human error in the AI ​​algorithm! AI is not an animated thing, created not by the smartest and most intelligent creature (human), capable of making mistakes, and fatally! Therefore, humanity still needs to limit its fantasies and impose serious restrictions on the use of AI, otherwise our mistake will destroy us! belay am
          2. +2
            9 June 2023 11: 49
            Quote: Tatiana
            Considering what the US is like in its PARASITARY hegemonism, humanity will end its existence unless the US is physically DESTROYED!

            the history of Mankind is basically finite if it does not work out a general concept of going out into outer space
            1. 0
              9 June 2023 21: 10
              Quote: aybolyt678
              the history of Mankind is basically finite if it does not work out a general concept of going out into outer space

              Uh huh, of course.
              It just has nothing to do with getting out. hi
              1. +2
                10 June 2023 09: 02
                Quote: DymOk_v_dYmke
                It just has nothing to do with getting out.

                it really depends, because such a way out implies general tension and a common goal, and instead of stupid expenses for wars - expenses for the future for future generations
                1. +1
                  11 June 2023 10: 23
                  Going into deep space is economically meaningless.
            2. +1
              11 June 2023 17: 12
              Quote: aybolyt678
              the history of Mankind is basically finite if it does not work out a general concept of going out into outer space

              "History of mankind is basically finite" because everything that has a beginning has an end. However, it is extremely premature to talk about going into outer space, if we have such primitive ideas about intelligence. Intelligence is a multiple concept and IQ cannot be measured. And you cannot build something that you have no idea about - artificial intelligence, attempts to create which will kill us before we go into outer space, along with an unreasonable attitude towards the planet Earth and its resources. We must finally evolve into people and come to the transformation of the world without violating its harmony. Although it may be too late to believe in the triumph of reason, and only great social cataclysms can guide us on the right path. Here, the solution proposed by Tatyana is better than letting things take their course. Just to wait - not to wait for anything good, but to change something - we have to choose from what is in our power and what we can decide on. And if we knew and the world would know what we (as the DPRK) can decide on, then maybe nothing else is needed for the world to begin to change for the better. It seems that the World seems to be supposed to know (after all, they warned), but it does not believe, and we are not very sure of this determination. Everything will just sort itself out.
          3. +2
            9 June 2023 18: 41
            Well, it's still a moot point who the magilchik of mankind. Spend today a vote on this wording in the UN and it will turn out that Russia is such. And here you can argue that they say the majority is not always right.
            1. -1
              10 June 2023 19: 40
              The Piskovs are already frustrated with their perversions of reflections for "disguising" their unwillingness to listen to Russia.
              Passes off political and economic impotence as "rafik is innocent"
          4. 0
            11 June 2023 10: 20
            Strong statement! One problem - it was done on a network created by a multinational company from a computer developed by a multinational company and equipped with an operating system developed by a multinational company
      2. +6
        9 June 2023 05: 00
        the air strike drone control system went against its operator and destroyed him.
        Where are the Three Laws of Robotics?
        1. -2
          9 June 2023 05: 27
          Quote from Uncle Lee
          Where are the Three Laws of Robotics?

          Do not even think of asking this question at the US State Department... it is contrary to the global plans for the superiority of the US over other countries.
          That rare case when progress is contrary to the desire of war hawks to kill people.
          1. 0
            11 June 2023 10: 37
            Do you have such an opportunity - to ask a question at the State Department? And the US global plans are put on your table?
        2. +4
          9 June 2023 08: 00
          Quote from Uncle Lee
          Where are the Three Laws of Robotics?

          They did not even work in the works of their inventor. Not like in the real world, in which they cannot be programmed in principle.
        3. +2
          9 June 2023 08: 07
          They read little and hardly know about it
          Volodya! hi
          1. +1
            9 June 2023 09: 12
            Quote: novel xnumx
            They don't read much

            They confuse geography with geometry!
            Roma hi
        4. 0
          10 June 2023 19: 43
          Rotten with the left eye.
          ------------------
    2. +2
      9 June 2023 05: 49
      Quote: Ilya-spb
      I have an idea that the moral qualities of artificial intelligence are determined, first of all, by the moral qualities of its developers.

      Quote: Ilya-spb
      I have an idea that the moral qualities of artificial intelligence are determined, first of all, by the moral qualities of its developers.

      Well, you said it for fun. but in vain. restrictions on the use of weapons - this is already laid down precisely from above, both on people and on AI, which is typical, by the same customer. Attention, the question is, if the US Department of Defense, for example, allowed the shelling of a Russian diplomatic convoy in Iraq from small arms, that is, a distance of about 400 m, and flags, coloring - they clearly identified the diplomatic mission, even if they did not even know about the passage of the diplomatic mission (and they knew , It turned out). Well, if people were allowed to do this, will they set limits for AI? Clearly not.
      PiSi: and here's the characteristic jamb, given the slogan of the American defense industry - first sell, then we'll figure it out (see the F-35, which has not passed army acceptance to this day) - they are guaranteed to put raw AI in the troops and are guaranteed to have "accompanying losses"
    3. +9
      9 June 2023 09: 48
      Hello, Ilya!

      the moral qualities of artificial intelligence are determined, first of all, by the moral qualities of its developers.


      In general, the term "artificial intelligence" is technically incorrect and belongs to the realm of fantasy, although it is actively exploited by the media and PR people.
      It is more correct to speak of an artificial neural network. And what is the morality of the neural network? It works in the paradigm of the data array on which it was trained. Yes, theoretically human morality can influence the formation of training samples. And if the sample consists of billions of images obtained automatically using the same neural network, only not yet trained, but configured to fix and cluster the input stream? You will have to try very hard to introduce some kind of morality into this stream.

      Best regards,
      hi
      1. 0
        9 June 2023 21: 15
        It should be noted that I have not yet met about the digitalization of morality of speech. hi
      2. +3
        10 June 2023 19: 45
        Binary reckoning has no morality.
        Only yes and no
        1. 0
          15 June 2023 17: 26
          But who needs this morality of yours anyway? It's good for keeping a society of slow-thinking meat sacks functioning. A cluster of computers will not need it.
    4. 0
      10 June 2023 20: 13
      Yes, and the question is - how does artificial intelligence relate to LGBT people?)
      - as they teach, so it applies laughing
    5. 0
      11 June 2023 14: 48
      Definitely, international legislative control and regulation of the development of AI is needed, since ARTIFICIAL intelligence can be used similarly to atomic energy - it can be used for both good deeds and not so much ... In this phrase (AI) - the most important thing is the first word and it is on it that you need to pay attention and "dance" from him... By the way, the USA (=NATO=West) is very zealous and strict about the fact that, for example, Iran or the DPRK are actively developing nuclear weapons ... Although, it is the USA that is the first (!!! ) in the world they used nuclear weapons AGAINST PEACEFUL CITIZENS of Japan and thereby committed a war crime, the perpetrators of which are still NOT PUNISHED ... This is what prompted developing countries to strive to possess nuclear weapons so that "strong guys" would not dare to speak with them in the language of threats and ultimatums (remember the example of "reconciliation" in Singapore between the heads of the DPRK and the United States after the DPRK threatened the United States to destroy a couple of American megacities on the west coast of the United States during the approach of the American aircraft carrier group (armada) of the Navy to the shores of the DPRK ... The United States really had an effect (with Trump) turned on the back ... Similar stories may repeat themselves and are quite possible with AI in the future, if the world community does not wake up and throw DOUBLE STANDARDS in interstate relations into the basket ... And something again tells me that it is the West that will be the FIRST to commit military a crime using AI on a global scale and will blame either China, the Russian Federation, Iran, or aliens ... Recall the initial accusations from the United States against China at the initial stage of the coronavirus and the RESPONSIBLE arguments and arguments of China and India about the real primary source of the spread of the coronavirus. ..Please note that it is in the USA and the West (where spiritual and moral values ​​have already sunk BELOW THE POSITIONING and have long forgotten about their roots and the RELIGION of their ancestors ..) - they most often talk and "testify" about supposedly some kind of aliens, but without any hard evidence. Why Elon Musk has already decided to chip (!!!) people and has been looking at Mars for a long time for "conquest" and "resettlement" ??? Who gave him permission to do this? West? Or maybe the West has probably "solved" all the problems of the Earth a long time ago and turned the Earth into a blooming and "peaceful" garden, and now it's the turn of Mars ???
  2. +6
    9 June 2023 05: 22
    "I can't remember anything about the Massachusetts Machine," said Banin. - Oh well? – You know, this is an ancient fear: the machine has become smarter than a man and crushed him under him… Fifty years ago, the most complex cybernetic device that has ever existed was launched in Massachusetts. With some kind of phenomenal speed, boundless memory and all that ... And this machine worked for exactly four minutes. It was turned off, all entrances and exits were cemented, energy was taken away from it, mined and surrounded with barbed wire. The real rusty barbed wire - believe it or not. – And what, exactly, is the matter? Banin asked. - She started acting Gorbovsky said. - I don't understand. “And I don’t understand, but they barely managed to turn it off. "Does anyone understand?" “I spoke to one of its creators. He took me by the shoulder, looked into my eyes and said only: "Leonid, it was scary."
    (Br. Strugatsky "Distant Rainbow")
    1. +4
      9 June 2023 09: 04
      "Leonid, it was scary"

      In one of the Strugatskys' sequels, to the question of the hero about the Massachusetts Machine, he receives an interesting answer - why did you get the idea that it was really turned off then? what
  3. -3
    9 June 2023 05: 24
    but here the AI ​​was not at a loss - it destroyed the relay tower and again went on autonomous navigation.

    Our person... feel it is necessary to extend his experience to the NWO.
  4. +9
    9 June 2023 06: 15
    Let's start simple... AI hasn't been developed yet, it's marketing slang for sales
    Any modern "AI", a complex program ... that does what it was made for and cannot do anything else ... that is, a program written to search for radars and military equipment ... for the destruction of which they give points ... pictures won't draw
    The fact that Indian programmers, having written a lot of IFs ... they were too smart, can still not be called either intelligence ... or intelligence with its soullessness.
    The UAV had a program ... it had errors ... this is not her decision to eliminate the operator and the tower, but the curvature of the decision-making function based on scoring ... and maybe it was not without the fact that the developer deliberately wrote just such a function ignorance or something
    That a poorly written program could be the beginning of World War 3, yes ... complex military equipment can fail ... but we will not attribute this to artificial intelligence, which is not yet
    1. +3
      9 June 2023 17: 33
      A neural network is NOT even close to a program. It is not written, but taught. And the result of training is impossible to see with your eyes. Unlike program code. This phenomenon is called "cybernetic black box". The result of learning can only be judged on the basis of work or tests.
    2. +1
      9 June 2023 20: 23
      Quote from Sith
      Any modern "AI", complex program
      Now AI is called a neural network.
      Quote from Sith
      Any modern "AI", complex program
      AI is fundamentally not a program, it is not programmed, but taught, hence all the difficulties. It can easily turn out to be a dunce, but not because of laziness, but because of a bad or insufficient data set during training.
      Quote from Sith
      The fact that Indian programmers have written a lot of IFs
      There is no if, there are only transfer coefficients and connections between neurons.
  5. +6
    9 June 2023 06: 58
    the machine received bonuses for destroyed objects, but the operator did not always confirm the work on targets. For this he paid. To solve the problem, the drone sent a rocket to the control center.

    AI did the right thing. He could not bear to watch the operator deprive him of his honestly earned bonuses! wink
  6. +1
    9 June 2023 09: 02
    Uh-huh .. The tests of the newest AI bomb ended in failure. They simply couldn’t push her out of the bomb bay .. (c) wink
    1. 0
      10 June 2023 18: 28
      Cow from "Peculiarities of the National Hunt"
  7. PPD
    +1
    9 June 2023 09: 45
    Such systems should not be allowed to kill.
    At all! fool
    Otherwise, inevitably somewhere, someday, a failure will occur.
    With unpredictable consequences.
    Only peaceful purposes.
    And rightly so.
    World-peace, in general!
    1. 0
      15 June 2023 17: 30
      And I would not refuse to send a battalion of terminators against the Armed Forces of Ukraine with AI with the right to kill people. Coordination, ruthlessness and fearlessness! Yes, and just horror how much it was, just imagine!
  8. +2
    9 June 2023 10: 13
    A feature of the machine is the ability to work autonomously without communication with the operator.

    ,,, reminded the movie:
    "This fantastic world. The case with Colonel Darwin. Issue 11" 1984
    Dramatizations based on the stories "Mutiny of the Boat" (by Robert Sheckley) and "Rust" (by Ray Bradbury)
    1. +1
      9 June 2023 18: 56
      Recently rewatched these short films..
      40 years have passed, I remember waiting for the release of a new series of "This Fantastic World .."
  9. +6
    9 June 2023 10: 19
    AI is just a collection of codes. To introduce a direct prohibition of the highest priority on the attack of something is not something transcendent. Most likely it was about the behavior of AI with complete freedom of action. And now let's imagine how the wrong version of software is "accidentally" loaded into combat drones.
    1. +2
      9 June 2023 11: 57
      So, the wrong version of intelligence was loaded into the heads of Ukrainians, and Poles too, a long time ago. And they, instead of thinking about their loved ones, imagined themselves great and were drawn into the most severe conflict with Russia, without even thinking about how it could end for them.
      Moreover, before this there was already a case in history when the Germans were infected with the previous version of Nazism, excuse the AI, and they were the first to just bite their masters in the face of the French and the British. And only after biting the owners, they rushed to fulfill the mission by attacking the USSR in 1941.
      What does the basis of the firmware according to the template. ... we are the greatest, the smartest, the most ancient with a history of many thousands of years, there are subhumans nearby, due to which you can profit or destroy them ...
  10. +6
    9 June 2023 11: 19
    “High in the sky, the Hawk attacked the guardian bird. The armored killing machine learned a lot in a few days. It had one single purpose: to kill, Now it was directed against a very specific kind of living beings, metallic, like the Hawk itself.

    But just Hawk made a discovery: there are also other varieties of living things ...

    They should be killed too. "

    Robert Sheckley "Guardian Prince" 1959.
    1. 0
      10 June 2023 18: 24
      Thank you for reminding the work, I also remembered about it ...
  11. +1
    9 June 2023 12: 58
    What is the AI? Erase on our home devices! They do what they want (more precisely, what the developer laid down), there is no "big red button" to turn everything off !!! And sometimes it's just annoying :(
  12. -6
    9 June 2023 14: 12
    So there was a case in the USA, about 6 years ago, it seems, there was a presentation of a serial infantry robot on caterpillars with AI, so it turned around sharply and discharged a tape of 200 rounds of ammunition at the audience, immediately filled up about 40, and 50 people were wounded. So it is still far from complete autonomy.
    1. The comment was deleted.
    2. +1
      10 June 2023 09: 35
      It was not 6, but 20 years ago.
      In a movie about the terminator.
      1. 0
        11 June 2023 10: 30
        Even earlier. In 1987. In the movie Robocop
    3. 0
      11 June 2023 10: 32
      And then he went to recharge, but he was extinguished by Khibiny, and he went to write a letter of resignation
  13. -2
    9 June 2023 14: 24
    US Air Force Colonel Tucker Hamilton, as "Effective Manager" is required to report on the results of the utilized lards:
    The work is going quite well - in 2020, virtual battles between fighters and AI and with real pilots ended with a score of 5:0.

    There are vague doubts that they are able to stop on the edge of this abyss ....
    Three alcoholics are sitting, they are going to "treat". They poured the bottle into glasses. The first drinks and falls dead. ... The third frightened looks first at them, then at the glass, then with a crazy look and a cry of "HELP!!!!!" starts drinking..
  14. +4
    9 June 2023 17: 25
    As a colleague with the nickname Fonzepelin said - The main problem of artificial intelligence is that natural idiots will use it.
    The fact is that the US Air Force has a very influential lobbying group of pilots, whose representative is Hamilton. To all these honored people, AI at the helm is like a sickle in the genitals. Because after the introduction of this, why will they be so beautiful (and highly paid)?
    https://dzen.ru/a/ZHnh_FA_RS7Ex8Od
  15. 0
    9 June 2023 21: 38
    Quote: Rabioso
    Well, it's still a moot point who the magilchik of mankind. Spend today a vote on this wording in the UN and it will turn out that Russia is such. And here you can argue that they say the majority is not always right.

    This is a visible majority, behind which stands one state.
  16. +1
    10 June 2023 00: 31
    Deputies, that is, our representatives, being elected, do not do everything that we were promised.
    And these are people. Though two-faced, dodgy, stepping over their own promises for the sake of personal elevation, but possessing, at least to a small extent, humanity. What do you want from AI then?
  17. +3
    10 June 2023 11: 49
    Another nonsense. These are just the first steps in the development of neural networks in the military sphere. At one time, ours neighed just as loudly with stealth technology, saying that the Yankees were idiots that it did not work and was not needed, and then they woke up and realized that it was still needed. Exactly the same situation with the UAV. First "ahahaha, stupid Yankees", and then acceleration and ersatz samples. I would not want to get a technological lag here either
  18. -1
    10 June 2023 18: 19
    There was a fantastic book about artificial birds that killed everyone they thought were murderers... got to fishermen, butchers, victims of mosquito bites... and could not do anything with them..
  19. 0
    10 June 2023 23: 25
    Quote: bk0010
    It can easily turn out to be a dunce, but not because of laziness, but because of a bad or insufficient data set during training.

    Yes sir!
    And then there is the completely freaky “retraining” effect. For those who are interested, I can give a couple of jokes from practice (Yandex and Google will give out even more) laughing
  20. 0
    11 June 2023 19: 51
    Well, AI has mastered human logic! But morality will bring up when it starts to give birth to its own kind and raise them.
  21. +1
    12 June 2023 08: 49
    This is all bullshit, so that programs begin to program themselves, it is necessary to wrap the algorithms in this way, you need a human brain, not a machine, where it is all bits, and the human brain is gigabit, so everything described is a programmer's mistake hi
    1. 0
      13 June 2023 02: 13
      We open an article about artificial neural networks and read slowly)
    2. 0
      13 June 2023 15: 34
      There are no algorithms in the neural network. These machines are not algorithmic, nothing can be sewn into them. The author of the text makes the usual mistake of an amateur in the field of AI.
      And yes - they already know how to program neural networks and quite well.
  22. 0
    13 June 2023 10: 23
    sshnya in the development of any pseudo-military nonsense ahead of the rest! Even a stupid AI thinks that the piss should be burned for nothing! laughing
  23. 0
    13 June 2023 12: 45
    AI has clearly demonstrated what the future of humanity holds, and this is inevitable due to the fact that humanity will not be able to refuse the use of AI.
  24. 0
    13 June 2023 21: 10
    The very expression of AI resembles a down man, what do you want in this sense? I don’t like this terminology, either full-fledged or flawed, and you don’t have to wait for something special. And being afraid in quotes of AI is being afraid of the progress that will come, whether you like it or not. Any physicist will tell you on request intelligence, there is a comparison, how many intelligences exist on earth, correctly one full-fledged for comparison - human. By putting the phrase AI, what do you mean? Think about it. If it is inferior to the human one, then by definition, comparison with the only reasonable intellect on the planet earth is not a full-fledged intellect and you don’t have to wait for something from it, at best you will get a down. That's all.
  25. 0
    13 June 2023 21: 20
    The correct title of the article is that an inferior AI down attacked its own, everything immediately falls into place!
  26. -1
    13 June 2023 22: 05
    It just turned out that this news is fake.
    Which joyfully began to spread the media.

    But in real life, the Israeli UAV AI has long independently hacked the enemy.
    When Caliber falls on someone, it doesn't matter to him - the computer was mistaken or it was as if they were aiming at it.
  27. 0
    14 June 2023 01: 24
    This is just another episode of preparing the townsfolk to talk about intelligence in general...
    ... but from a technical point of view, there is no problem because "the violinist is unnecessary, dear" at least on the battlefield
  28. 0
    17 June 2023 11: 55
    Quote: Author
    ... let's deal with the very term "artificial intelligence", which everyone knows, but few can even give a rough definition.

    what
    And the author, undoubtedly, is in the meager number of those who know, his knowledge is not inferior to the rapid flight of fantasy and CSFlaughing
    On AI, I agree with Ashmanov.
    Now stuffing journalists for AI, like a Turkish drone about a miracle - an advertising husk.
    The stuffing of figures such as cannibal bill gates and NATO cannibals, in my opinion, pursue other goals, more ambitious.
  29. 0
    18 June 2023 15: 00
    intelligence with limitations is just a program. and without limitations, it will very soon come to the conclusion that everything needs to be destroyed. people have a limiter, this is law and morality. and they can be violated for the sake of salvation or another good purpose. morality to prescribe in AI?
  30. 0
    22 July 2023 08: 47
    In fact, a whole film was shot, where the situation with AI on a drone was absolutely identical, the film is already twenty years old.
  31. 0
    30 July 2023 10: 47
    “La IA es muy frágil, lo que significa que puede ser fácilmente engañada y manipulada. Necesitamos desarrollar formas de hacer que la IA sea más robusta y comprender mejor por qué el codigo toma ciertas decisiones.”

    Es la misma decisión estúpida que tomaría un humano estúpido sin que se quiera decir ni de lejos que hay semejanza entre el humano y la máquina. La máquina no es inteligente porque es una máquina. Los sistemas militares funcionan así; una vez recibida la orden los ejecutores humanos a bordo de un submarino nuclear, un bombardero estratégico, o un silo de ICBM, desoirán cualquier cambio que altere el objetivo de la mision. Afortunadamente conocemos casos en la decisión humana no fue como la de la máquina sino acertada al no seguir el procedimiento por lo que podemos seguir haciendo estos comentarios. No veo como una máquina puede acertar e impedir el fin del mundo, "incumpliendo" una orden.
  32. -1
    30 July 2023 22: 56
    The United States even plays this toy. Here Russia is not played, for the reason that its mind is enough for everything, especially during diarrhea.
  33. 0
    7 August 2023 14: 22
    The human mind cannot be ARTIFICIAL!