Military Review

Deepfake technology as a pretext for war


A new world with deepfakes



These days almost every household appliance is online, and it is getting harder to find a spot on the map with no 4G coverage. Broadband Internet means, above all, HD video on the big platforms that are gradually replacing news feeds, analytics and plain entertaining reading for us. It is also an instrument of influence over billions of people, one that makes it possible to shape the "right" public opinion at the right moment. At the peak of these games with the public may sit deepfake technology, which has already proved its ability to turn celebrities such as Gal Gadot, Scarlett Johansson, Taylor Swift, Emma Watson and Katy Perry into stars of the porn industry.

A deepfake is an algorithm that simulates a person's behavior and appearance in video. The technology takes its name from the combination of "deep learning" and "fake". At its heart are the notorious neural networks, arranged as a Generative Adversarial Network (GAN). The two algorithms at the core of the program constantly compete with each other: one trains on the supplied photographs to create a realistic face swap, while the other eliminates unsuitable variants until the machine itself begins to confuse the original and the copy. The end goal of this intricate scheme is false photo and video content in which the original's face is swapped for another. For example, the charismatic US President Donald Trump could easily take the place of any odious 20th-century leader and preach open heresy to the masses from the rostrum. In one already generated deepfake video, former President Barack Obama allowed himself to curse at Donald Trump.
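The adversarial setup described above can be sketched numerically. This is a toy illustration of the standard GAN loss functions from the original formulation, not the code of any actual deepfake tool:

```python
import math

def discriminator_loss(d_real: float, d_fake: float) -> float:
    """Binary cross-entropy: D wants real inputs scored near 1, fakes near 0."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake: float) -> float:
    """G wants the discriminator to score its fakes as real (near 1)."""
    return -math.log(d_fake)

# A discriminator that tells originals from copies (real=0.9, fake=0.1)
# pays a low loss; one that is fully fooled (both 0.5) pays a higher one.
# Training alternates: D minimizes discriminator_loss, then G minimizes
# generator_loss, until D can no longer separate original from copy.
print(discriminator_loss(0.9, 0.1))  # ~0.21
print(discriminator_loss(0.5, 0.5))  # ~1.39
```

Once the two losses plateau with the discriminator stuck near 50/50 guessing, the generator's face swaps are, by construction, the ones the network itself cannot tell from the originals.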


Of course, at first deepfakes spread purely virally: the faces of Hollywood actresses spliced into the unpretentious plots of porn videos, with all the ensuing consequences. Or actor Nicolas Cage suddenly becoming the lead in scenes from the most iconic films of our time. Some of these creations are shown in the videos, and frankly, many of them look rather clumsy.




But hip-hop star Cardi B, appearing on Jimmy Fallon's evening show with the face of actor Will Smith, looks quite convincing.


And here is the original.


The handiwork on the Ctrl Shift Face channel looks good too. For example, Sylvester Stallone trying on the lead role in Terminator.



IT analysts are already claiming that deepfakes may be the most dangerous digital technology of recent decades. Alongside this technology (and building on it), specialists from Princeton, the Max Planck Institute and Stanford have built the Neural Rendering application. Its task is even more dangerous: to "force" a person's image to pronounce any text in any language in his own voice. All it takes is a 40-minute video with sound, from which the neural network learns to work with the person's voice and facial expressions and adapts them to new spoken phrases. Initially, of course, the idea is positioned as benign: the main consumers of Neural Rendering are meant to be film studios wanting to cut the time spent shooting retakes. But it immediately became clear to everyone that in the near future virtually any user will be able to generate hair-raising video fakes on a laptop. The second branch of the "fake" business was the DeepNude application, capable of quite realistically "undressing" any woman in a photo. In the service's first days the flood of requests was so large that the developers, fearing lawsuits, announced its shutdown. But hackers broke into the resource, and now DeepNude is available to everyone. Attempts are made to restrict access to the service, but everyone understands this is only a temporary measure.

Deepfakes have also proved a handy tool for scammers. A British energy company was robbed of 220,000 euros when a voice "clone" of a manager from Germany contacted the finance department. In a simulated voice he asked for money to be urgently transferred to an account in Hungary, and the business partners had no reason to distrust the caller. Mass-producing convincing fake video is still problematic, of course: state regulators constantly block resources hosting FakeApp and FaceFake, and ordinary computers are not yet powerful enough for fast video synthesis. The work has to be delegated to remote paid servers, which need thousands of photos of the original person and of the "victim" to operate.

Casus belli


Deepfake technology may leave actors out of work in the future: the entire industry could well switch to digital movie heroes, many of them raised from the dead by the technology. But these are more likely dreams of the future, since numerous unions and a basic shortage of computing power push the prospect several years ahead. Although already in "Rogue One: A Star Wars Story" the director "resurrected" actor Peter Cushing, who died in 1994, for one role. Rumor has it that the famous James Dean may appear in a new film about the Vietnam War. Neural networks also help aging actors look 10-20 years younger on screen, Arnold Schwarzenegger and Johnny Depp among them. On average, at least 15-20 thousand deepfake videos are generated worldwide every month, most of which end up on the Internet. Russian programmers are trying to keep up with the global trend: in July 2020 Vera Voice will invite fans of Vladimir Vysotsky's work to talk with a digital copy of the singer at the Taganka Theater.

Everything points to video and photo evidence ceasing to be a reliable argument in court, and to the global video-surveillance system becoming a waste of money. No one will trust frames from CCTV cameras: where is the guarantee they are not a synthesized dummy? In political propaganda, deepfakes are already becoming a powerful lever for swaying voters. In October 2019 California became the first state to ban the posting of deepfake videos of political candidates within 60 days of an election; violators of this law, AB 730, face liability. Several more states have since joined the initiative, and from January 2020 China will forbid publishing synthesized deepfake photos and videos without a special label. Incidentally, one currently effective way to spot a fake by eye is the lack of natural blinking in the synthesized characters.
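The blinking heuristic can be automated. One published approach (eye aspect ratio, EAR) tracks six landmarks per eye and counts frames where the eye closes; the sketch below assumes those landmark points come from some face-landmark detector, which is not shown:

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks, as in the Soukupova & Cech
    blink-detection scheme: open eye ~0.3, closed eye drops toward 0."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = math.dist(p2, p6) + math.dist(p3, p5)   # eyelid gaps
    horizontal = 2.0 * math.dist(p1, p4)               # eye width
    return vertical / horizontal

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count closed-eye runs of at least min_frames below the threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# A minute of talking-head footage with zero blinks is a red flag:
# people blink roughly 15-20 times per minute at rest.
```

The threshold and frame-count values here are illustrative; a real detector would calibrate them per subject and combine blinking with other cues.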


Stallone as a Terminator

Now imagine how a mature deepfake technology (and development cannot be stopped while there is demand) will upend the very notions of truth and falsehood, especially once state structures adopt the novelty. A skillfully deployed synthesized video about the next espionage exposé could be grounds, at the very least, for another package of sanctions or the closure of a diplomatic mission. There will be no need to stage chemical attacks on civilians to authorize a massive missile strike on a sovereign state. Acts of genocide, the aftermath of weapons of mass destruction, provocative and abusive behavior by a country's leaders: in the right situation, this whole deepfake bouquet can justify the start of another military adventure in the eyes of voters. And when the guns speak and the rockets fly, no one will much remember which damning video started the war.

For now there is no definite answer to what to do about this disaster. The best algorithms honed on exposing deepfakes can guarantee only 97-percent certainty. Any ambitious programmer can currently take part in the Deepfake Detection Challenge, which Facebook announced in September 2019: a prize fund of 10 million dollars awaits whoever develops an algorithm that recognizes fake videos with a 100-percent guarantee. One can only guess how quickly the underground deepfake developers will respond.
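That 97-percent figure is less reassuring than it sounds. A quick Bayes'-rule sketch shows why (the 0.1% prevalence of fakes is an illustrative assumption, not a measured number): when genuine videos vastly outnumber fakes, even a good detector mislabels most of what it flags.

```python
def p_fake_given_flagged(sensitivity: float, specificity: float,
                         prevalence: float) -> float:
    """Bayes' rule: P(video is fake | detector flags it as fake)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# 97% sensitivity and 97% specificity, but only 1 clip in 1000 is a fake:
p = p_fake_given_flagged(0.97, 0.97, 0.001)
print(round(p, 3))  # ~0.031: roughly 97% of flagged clips are false alarms
```

This base-rate effect is why "97% accurate" is not good enough for mass screening, and why the challenge demands something close to a guarantee.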
Photos used:
youtube.com, profil.ru
53 comments

  1. tracer
    tracer 17 December 2019 05: 29
    +3
    Hmm, a dangerous thing.
    1. knn54
      knn54 17 December 2019 11: 30
      +6
      You cannot buy an honest man, but you can defame him.
      It would be interesting to see alternative versions of films with the actors who turned the roles down, for example "Ivan Vasilievich Changes His Profession" with Nikulin and Mironov.
      P.S. "17 Moments of Spring" with Gaidai's famous trio and Krachkovskaya as radio operator Kat, better not to suggest it...
      Brave New World indeed.
      1. Shurik70
        Shurik70 20 December 2019 23: 45
        0
        And how happy the traffic police will be...
        Right now they are afraid of dashcams and don't issue fines on far-fetched pretexts.
        But in the future any accusation against them will be dismissed as a synthesized fake
        belay
        1. trenkkvaz
          trenkkvaz 22 December 2019 22: 56
          0
          I'm sure a technology for verifying the authenticity of video will appear.
          Some kind of verification that the footage was recorded at a certain time and has not been altered.
          Perhaps this can be done through a server using cryptography.
          5G technology, with its enormous connection speeds, will help make exactly this kind of thing possible.
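One way the commenter's idea could look in practice. This is a sketch with a made-up server key: a trusted server binds a hash of the footage to a recording timestamp with an HMAC, so any later edit changes the hash and breaks verification. A real service would use public-key signatures and a trusted clock rather than a shared secret.

```python
import hashlib
import hmac
import json
import time

SERVER_KEY = b"demo-secret"  # hypothetical key held only by the trusted server

def attest(video_bytes: bytes, timestamp: int = None):
    """Server side: bind a hash of the footage to a recording time."""
    if timestamp is None:
        timestamp = int(time.time())
    digest = hashlib.sha256(video_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, "ts": timestamp}, sort_keys=True)
    tag = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload, tag

def verify(video_bytes: bytes, payload: str, tag: str) -> bool:
    """Later: recompute and compare; any edited frame changes the hash."""
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False  # attestation itself was forged or altered
    return json.loads(payload)["sha256"] == hashlib.sha256(video_bytes).hexdigest()
```

Note what this does and does not prove: it shows the bytes existed unmodified at attestation time, not that the content was true when filmed.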
  2. Mytholog
    Mytholog 17 December 2019 06: 03
    +1
    This technology is dangerous only during the adaptation period, while not everyone knows about it yet. Once it becomes the "expected joke", its danger will be zero.
    1. Aerodrome
      Aerodrome 17 December 2019 07: 01
      0
      Quote: Mytholog
      its danger will be zero

      this "zero" can rule the country ...
      1. cniza
        cniza 17 December 2019 09: 05
        +3
        "Zero" cannot lead by definition, but it can multiply everything by itself...
    2. Aleksandre
      Aleksandre 17 December 2019 10: 39
      +1
      Quote: Mytholog
      This technology is dangerous only during the adaptation period, while not everyone knows about it yet. Once it becomes the "expected joke", its danger will be zero.

      One might assume that over the lifetime of the so-called yellow press everyone has long had the chance to adapt to it, or at least adapt a little. Yet that is not the case: every throw at the fan still draws an immediate violent reaction from devoted readers and consumers, for human stupidity is infinite.
  3. Kirill Dou
    Kirill Dou 17 December 2019 06: 17
    -1
    I live in the world of William Gibson wink At last.
    1. depressant
      depressant 17 December 2019 08: 34
      +3
      I recently saw a video. Different people, each with a cat in their arms, are looking at their phone screens. The app overlays a cat's face onto the person's face. All the cats react the same way: the cat stares at the screen in surprise, looks up at its owner, sees that he is normal, looks back at the screen; the animal's cognitive dissonance hits its limit, and it bolts from its owner's lap in horror, fur on end, some cats hissing... And videos of supposedly famous politicians and actors are old news by now, the Internet is full of them. Again, the "revived" Princess Leia in the new "Star Wars", Obama giving a speech... And you can't tell! And since you can't tell, the era is apparently coming of politicians who say whatever is needed from behind the scenes, something available in every country. But hasn't it always been that way? Only it used to be more expensive: the politician had to be paid. Now it will be cheaper, since only the programmer has to be paid.
      1. Kirill Dou
        Kirill Dou 17 December 2019 08: 47
        0
        I recently saw a video. Different people, each with a cat in their arms, are looking at their phone screens.
        - Watched it too, it was funny)
      2. AlexVas44
        AlexVas44 18 December 2019 09: 23
        0
        Quote: depressant
        the cat stares at the screen in surprise, looks up at its owner, sees that he is normal, looks back at the screen; the animal's cognitive dissonance hits its limit, and it bolts from its owner's lap in horror, fur on end, some cats hissing...

        I didn't watch it, but from the description I pictured it very vividly, and if the owner is also à la Obama, you could put in .......... laughing laughing good fellow
  4. g1washntwn
    g1washntwn 17 December 2019 06: 25
    +4
    For example, the UltraHD format is 3840x2160 = 8,294,400 pixels. Is it really so hard to "scatter" a watermark of deliberately altered pixels across the picture so that the overall visual quality is barely affected?
    Make the pattern dynamic, with an encrypted key. Blockchain and quantum technologies can help with the rest.
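A minimal sketch of what such a keyed watermark could look like (illustration only: real forensic watermarks are far more robust and survive compression and cropping). Pixel positions are derived from a secret key, and their least significant bits are nudged so the mark is invisible to the eye but checkable by the key holder.

```python
import hashlib
import random

def keyed_positions(key: bytes, n_pixels: int, n_marks: int) -> list:
    """Derive a repeatable set of pixel positions from the secret key."""
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return random.Random(seed).sample(range(n_pixels), n_marks)

def embed(pixels, key: bytes, n_marks: int = 64):
    """Set the least significant bit at each keyed position (a +/-1
    brightness change, imperceptible on an 8-bit channel)."""
    marked = list(pixels)
    for pos in keyed_positions(key, len(pixels), n_marks):
        marked[pos] |= 1
    return marked

def check(pixels, key: bytes, n_marks: int = 64) -> bool:
    """The mark survives only if every keyed LSB is still set."""
    positions = keyed_positions(key, len(pixels), n_marks)
    return all(pixels[p] & 1 for p in positions)
```

Replacing a face overwrites some of the keyed pixels, so check() fails on the tampered frame; without the key, a forger cannot tell marked positions from ordinary ones.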
    1. Stas157
      Stas157 17 December 2019 08: 17
      +7
      Quote: g1washntwn
      Is it really so hard to "scatter" a watermark of deliberately altered pixels across the picture?

      As I understand it, that protects the original video. But how do you recognize a fake if the video was never protected?

      An algorithm can fake the picture, but hardly, so far, the behavioral model, the psychological portrait. Yet it is precisely by that key that we unmistakably identify Mikhan or Golovan in any of their incarnations, without even seeing the pictures.
      1. depressant
        depressant 17 December 2019 11: 23
        +3
        It's not hard at all. They'll bring in a psychologist, and he'll fine-tune the video for you so that you won't notice it's a fake and not the real thing. I suspect there are already programs that analyze a public figure's behavior and manner of speaking from news footage; never mind the voice. This is the 21st century, everything gets faked! For example, when I listened to Putin's speech urging us to view the pension reform with understanding, I couldn't shake the feeling that it wasn't Putin. Just not him, and that's that!
        1. Maximilian37
          Maximilian37 18 December 2019 06: 23
          0
          Quote: depressant
          ..... For example, when I listened to Putin's speech urging us to view the pension reform with understanding, I couldn't shake the feeling that it wasn't Putin. Just not him, and that's that!


          That's because the media framed everything as if the government had decided to raise the age on its own, and Putin supposedly had no choice but to sign the law. And your brain refused to the last to believe that he was "FOR" raising the retirement age.
        2. g1washntwn
          g1washntwn 19 December 2019 07: 51
          0
          The copyright marks in a picture can be erased and replaced, but after that, even with a 100% pixel match, the "cracked" picture ceases to be an authentic copy. You can safely call it a fake. The same goes for doctored "video evidence from the Internet": without a certified digital signature it is not evidence. Over any media outlet that uses such "hacked" material, comrade Themis waves her sword, and it dives headlong into a vat of you-know-what.
      2. bk316
        bk316 17 December 2019 16: 57
        +1
        But how do you recognize a fake if the video was never protected?

        No, that is exactly what the article is about. THAT'S IT: FORGET ABOUT VIDEO EVIDENCE FROM THE INTERNET.
        There is nothing terrible in that; there will be problems only during the transition period, until it sinks in with the public.
      3. g1washntwn
        g1washntwn 19 December 2019 08: 09
        +1
        It's impolite to answer a question with a question, but the question contains the answer. Does an electronic document without a digital signature have legal force? No. Footage should be approached the same way: any unverified set of pixels is a fake by default.
    2. dmmyak40
      dmmyak40 17 December 2019 11: 59
      +3
      Quote: g1washntwn
      For example, the UltraHD format is 3840x2160 = 8,294,400 pixels. Is it really so hard to "scatter" a watermark of deliberately altered pixels across the picture so that the overall visual quality is barely affected?
      Make the pattern dynamic, with an encrypted key. Blockchain and quantum technologies can help with the rest.

      Puts it beautifully, the dog!
  5. eagle owl
    eagle owl 17 December 2019 07: 08
    +6
    Nothing special, really. First, people have always known how to manipulate information, ever since the bylinas of wandering gusli players. They're spinning nonsense about Ivan Tsarevich? Go check it, in the Thrice-Ninth Kingdom. And collages and retouched photographs came into use practically with the invention of photography itself.
    Here we simply have another aspect of it. In general, the same thing is needed as always: the ability to work with information...
    P.S. As for "unemployed actors", that's not news at all; it's a long-standing fact the whole world laughs at. Nowadays the advertising for any film will tell you twenty times about the "special effects", and that the film is not a rehash of something old, Terminator-28... This was already skewered back in the 80s in "Back to the Future". Remember those "Jaws" sequels?
    1. g1washntwn
      g1washntwn 19 December 2019 08: 04
      0
      I agree. The first fakes were scratched on cave walls: the rock-art fantasy "How We Defeated the Mammoth". The process of lying merely changes its forms, methods and speed. Perhaps in the future fake memories and reflexes will be stitched into us at birth, straight into the subcortex and strictly to fit the current political situation, which will greatly ease the work of the local State Departments and Politburos.
  6. rotmistr60
    rotmistr60 17 December 2019 07: 24
    +8
    Even without deepfakes, the notions of truth and falsehood are turned upside down today. Terrible things are already happening without any computer programs: fakes, accusations without evidence, massive information attacks in the media, open provocations... A pretext for an armed conflict can arise without this software too.
    1. cniza
      cniza 17 December 2019 09: 02
      +3
      Yes, a great deal is done with mere words, without any justification or evidence.
  7. awdrgy
    awdrgy 17 December 2019 08: 33
    0
    "The global video surveillance system will turn out to be a waste of money." And that's a good thing: do we really want to be watched constantly? It's unpleasant. There's an intercom with a camera at the entrance to the building, cameras everywhere, and all of it accessible to anyone who wants it. What if I was off visiting a mistress?)
    1. Pavel57
      Pavel57 17 December 2019 09: 36
      +2
      You can always talk your way out of it: the mistress is a fake.
      1. awdrgy
        awdrgy 17 December 2019 09: 41
        -1
        Yes sir! Or "it wasn't me, it's a fake". Though I'm afraid that won't work.
  8. srha
    srha 17 December 2019 08: 35
    +1
    Heh, dive so deep into fakes that you completely forget about information security, which has been known for several thousand years. Our distant ancestors probably also agonized over the fact that anyone can carve fake tablets in stone, then equally fake papyri, then paper articles, bills, contracts, and now we've gotten to video... But put a certified stamp and signature on the video (an electronic digital signature in the case of digital video; the technology has long been known and costs a couple of cents), and a fake will not get past anyone who has mastered the protection technology.
  9. cniza
    cniza 17 December 2019 09: 00
    +2
    A prize fund of $ 10 million will go to someone who can develop an algorithm that allows you to recognize fake videos with a 100% guarantee.


    Is that a lot or a little for such a development? That is the whole problem: the solution might cost "three kopecks", or it might cost trillions...
    1. spech
      spech 17 December 2019 10: 54
      +1
      A prize fund of $ 10 million will go to someone who can develop an algorithm that allows you to recognize fake videos with a 100% guarantee.

      these "algorithms" have been known since the days of the Inquisition am
  10. Pavel57
    Pavel57 17 December 2019 09: 35
    +1
    Curiouser and curiouser.
  11. VictorStar
    VictorStar 17 December 2019 09: 51
    +1
    Well, "Generation P" has arrived: once this goes to the masses, drawing a synthetic top official costs nothing.
  12. Mikhail3
    Mikhail3 17 December 2019 10: 40
    +1
    All this has already meant that no video or photo can be considered proof of anything anymore. That's it. Forensics has simply had some of its tools taken away. On the one hand this is not encouraging, but on the other... the "unmanned" security systems will have to be staffed with people again, since cameras will be left with only a rapid-alert role, and to prove anything you will need a living witness: a guard who saw it in person.
  13. MoJloT
    MoJloT 17 December 2019 10: 58
    +1
    "Force" a person's image to pronounce any text in any language in his own voice.

  14. Avior
    Avior 17 December 2019 11: 41
    0
    Synthesized videos are detected by forensic examination, so there won't be a serious problem.
    It's like the story of "Mom, it's your son, I had an accident, give money to the man on the street corner or they'll put me in jail."
    While people didn't know about it yet, the scammers got away with it.
    1. bk316
      bk316 17 December 2019 14: 59
      0
      Synthesized videos are detected by forensic examination, so there won't be a serious problem

      You don't seem to understand what the article is about.....
      In such cases, isn't it better to keep silent?
      Ask yourself: who exactly will determine that it's a fake?
      1. Avior
        Avior 17 December 2019 17: 42
        -1
        I understand perfectly.
        It's you who, it seems, does not.
        They will not pass as evidence in court: examination will reveal signs of editing.
        And the effect on media audiences will quickly fade after a few such fakes are exposed.
        They work only while most people subconsciously believe that video is impossible, or very hard, to fake.
        As soon as it becomes clear to most people that fakes are possible, the effect of such plants will disappear.
        So, do you understand?
        1. bk316
          bk316 17 December 2019 18: 28
          +1
          They will not pass as evidence in court: examination will reveal signs of editing.

          You don’t understand anything. laughing
          Goodbye.
          1. Avior
            Avior 17 December 2019 18: 29
            +1
            Looks like it's you who doesn't.
      2. Fat
        Fat 17 December 2019 18: 19
        +1
        Quote: bk316
        Synthesized videos are detected by forensic examination, so there won't be a serious problem

        You don't seem to understand what the article is about.....
        In such cases, isn't it better to keep silent?
        Ask yourself: who exactly will determine that it's a fake?

        The software product that created the fake, what else? The problem is being solved; expensive, clearly. One bot makes the fake, another recognizes it, with a certain degree of reliability, of course))). Programmers also want to earn money, handing AI over to them in...
        1. bk316
          bk316 17 December 2019 18: 41
          +1
          The software product that created the fake, what else?

          Wonderful; that is in fact what I was hinting at to the aviator.
          But he's stubborn, he won't have it. He firmly believes in the superiority of the United States, and likewise in the omnipotence of expert examination. laughing


          The two algorithms at the core of the program constantly compete with each other: one trains on the supplied photographs to create a realistic face swap, while the other eliminates unsuitable variants until the machine itself begins to confuse the original and the copy.


          In other words, if the product is the same, it will not expose the fake. So a more "powerful" neural network is needed, and here is the rub: there is no metric for a neural network's "power" in the context of the problem being solved. The number of nodes, the layers, the topology say nothing about the quality of the solution. As a result we get a situation where one neural network tells the court "fake" and another says "not a fake". BUT A NEURAL NETWORK CANNOT EXPLAIN WHY it is a fake. Moreover, any network solves this problem only with some probability, that is, it gives false positives.
          Bottom line: a software product of this kind cannot solve the problem.

          I realize this is not hard for a programmer to understand, but I have explained it as best I could.
          By the way, do you know that no programmer would write the Russian word for "program" with ONE "m"? laughing
          1. Fat
            Fat 17 December 2019 18: 54
            +1
            Quote: bk316
            I understand that it is not hard for a programmer to understand this, but

            That reminds me of the joke about the blonde who was asked the probability of meeting a dinosaur on the street. The blonde said: 50%!
            Either I meet one, or I don't...
            Killer, but absolutely correct: the question just doesn't have enough boundary conditions)))) However.
            1. Good_Anonymous
              Good_Anonymous 18 December 2019 14: 09
              0
              Quote: Thick
              50%! Either I meet one, or I don't...
              Killer, but absolutely correct


              The blonde confused probability with the number of possible outcomes. If her answer seems correct to you, then you've made the same mistake.
          2. Avior
            Avior 17 December 2019 19: 14
            +1
            And I hinted to you that you misunderstood what the article says.
            In what court have you seen examinations performed by programs or neural networks?
            Examination is performed by a human, in manual mode where necessary.
            And under examination, splices and traces of editing will surface anyway, whatever neural networks made them, especially if not just the face but the whole figure has to be replaced, or created from scratch.
            Or, as another option, the examination will be unable to confirm authenticity, which is not the same thing as recognizing a fake.
            The programs the article talks about are not needed for court examinations; they are for quickly spotting fakes, ideally in real time.
            And how you managed to drag the USA into this is a complete mystery.
            hi
            1. Fat
              Fat 17 December 2019 21: 15
              0
              Quote: Avior
              And I hinted to you that you misunderstood what the article says.
              In what court have you seen examinations performed by programs or neural networks?
              Examination is performed by a human, in manual mode where necessary.
              And under examination, splices and traces of editing will surface anyway, whatever neural networks made them, especially if not just the face but the whole figure has to be replaced, or created from scratch.
              Or, as another option, the examination will be unable to confirm authenticity, which is not the same thing as recognizing a fake.
              The programs the article talks about are not needed for court examinations; they are for quickly spotting fakes, ideally in real time.
              And how you managed to drag the USA into this is a complete mystery.
              hi

              The programs discussed in the article are perfectly suited for creating a casus belli. An aggressor country needs a pretext, not proof. Examples: Operation Canned Goods (the Gleiwitz provocation), Powell's vial at the UN, the White Helmets. The article's very title is about that.
              1. Avior
                Avior 17 December 2019 21: 46
                0
                As soon as such programs become widespread, video will simply stop being taken seriously.
                With the same success you can fabricate a fake letter on a printer.
                It's just that, for now, video is trusted more.
                And as for a casus belli: making such an imitation is no problem even today. Dinosaurs run across our screens and surprise no one.
                The program merely makes it easier, but for the purpose you describe that makes no difference.
              2. Good_Anonymous
                Good_Anonymous 18 December 2019 14: 15
                +2
                Quote: Thick
                The programs discussed in the article are perfectly suited for creating a casus belli.


                Experience shows that a casus belli gets created without any deepfakes. And deepfakes, once the phenomenon becomes widely known, will quickly turn into a medium for pranks and skits.
            2. Greenwood
              Greenwood 18 December 2019 07: 52
              0
              Quote: Avior
              Examination is performed by a human, in manual mode where necessary.
              And under examination, splices and traces of editing will surface anyway, whatever neural networks made them
              It is very naive to think a human can reliably recognize this. For now fakes are still visible, but soon the human eye will barely catch the difference. Modern films show that many rendered effects, phenomena and creatures are perceived as real. That is how far the technology has come.
              1. Avior
                Avior 18 December 2019 08: 29
                +1
                Do you think an authenticity examination means an expert just watching the video?
                1. Greenwood
                  Greenwood 19 December 2019 05: 20
                  -2
                  I didn't say that. I just gave an example of how far the technology has come in cinema. The point is that there is no hundred-percent way to recognize a fake.
    2. Greenwood
      Greenwood 18 December 2019 07: 48
      -2
      Quote: Avior
      Synthesized videos are detected by forensic examination, so there won't be a serious problem
      The article says outright that there are no algorithms that recognize deepfake video with one-hundred-percent certainty. So what kind of examination are you talking about?
      Quote: Avior
      While people didn't know about it yet, the scammers got away with it
      And now, it turns out, they still manage if you're not in the know. People still fall for that scam, pensioners especially.
      1. Avior
        Avior 18 December 2019 08: 31
        +1
        Algorithms: we are talking about programs that can recognize fakes automatically.
        1. Greenwood
          Greenwood 19 December 2019 05: 18
          -2
          And programs work on the basis of the algorithms embedded in them. Your so-called "automatic mode" is also nothing more than an algorithm, a sequence of actions thought out by programmers. At the moment, neither such programs nor such algorithms exist.