Military Review

Deep fakes: who gets the liar's dividend?



"Deep fake" is a term that first appeared in 2017 to describe realistic photo, audio, video and other forgeries created using deep machine learning; in fact, the term derives from that method.

An American assembly line of honest forgeries

US experts say that in the near future deep-fake technology may pose a serious threat to national security. This is stated in the report to Congress, "Deep Fakes and National Security." The main danger of deep fakes is that the resulting uncertainty creates a factual vacuum that influences political decision-making at the interstate level. For example, the leader of one country makes a statement in a video message that provokes interethnic, racial or religious conflict, but the statement itself turns out to be fake. Realistic deep fakes of political leaders (V.V. Putin, A.G. Lukashenko, B. Obama, etc.) can be found publicly on YouTube by searching for "DeepFake."

Most often, deep fakery refers to the falsification of information about events using generative adversarial networks (GANs). The first network, the generator, creates fake data such as photographs, audio recordings or video footage from an original, realistic dataset. The second network, the discriminator, identifies and verifies the data it receives. After each iteration, the generator is tuned to produce increasingly realistic imitations. The networks compete, often for thousands or millions of iterations, until the generator improves to the point where the discriminator can no longer distinguish real data from fake.
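The adversarial loop described above can be sketched in a few lines. This is a minimal, hypothetical toy (not from the article or the congressional report): the "data" is just a 1-D Gaussian, the generator is an affine map g(z) = a·z + b, and the discriminator is a logistic classifier; the two are trained with the alternating gradient steps that GANs use. Real systems use deep networks, but the competitive dynamic is the same.

```python
# Toy 1-D GAN sketch (illustrative assumption, not production code).
# Real data ~ N(4, 0.5); generator g(z) = a*z + b; discriminator
# D(x) = sigmoid(w*x + c). Gradients are written out by hand.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(3000):
    real = rng.normal(4.0, 0.5, size=32)
    z = rng.normal(0.0, 1.0, size=32)
    fake = a * z + b

    # Discriminator ascent: maximize log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent: maximize log D(fake)  (non-saturating loss)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    g = (1 - d_fake) * w          # gradient of log D w.r.t. fake sample
    a += lr * np.mean(g * z)
    b += lr * np.mean(g)

print(f"generator offset b = {b:.2f}")
```

As the discriminator repeatedly catches the fakes and the generator adapts, the generator's offset b drifts from 0 toward the real mean: exactly the competition-until-indistinguishable dynamic the paragraph describes.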

The foundation of this machine computation is high-performance computing (HPC) infrastructure and applications. This infrastructure is the pivot of the new digital space; its scope runs from laboratory computing, artificial intelligence and the Internet of Things to media and entertainment.

A significant share of this computation is also required in the military sphere: for encryption and communications, control of unmanned vehicles, cybersecurity, offensive operations, and so on. The Networking and Information Technology Research and Development Program (NITRD), launched by the US Congress in 1991, aims to coordinate high-tech activity so as to secure US technological leadership in the world. Nearly a fifth of NITRD's 2021 budget request to Congress is devoted to high-performance computing.

The problem is that a wide range of actors have gained access to high-performance machine systems via cloud computing, and at a comparable pace. These are not only students and professors at technical universities, but also people interested in creating deep fakes for purposes far removed from ethical standards, such as political pressure or blackmail, including at the interstate level.

And here it is important not to fall into the pre-set traps of disinformation and falsification that accompany information wars.

Today the term "deep fake" can itself be used as a tool in this war, to discredit (conceal, neutralize, disavow) facts and information that contribute to a negative image of the United States and its allies in the eyes of the world community.

Correctors of the truth

Having begun as a complex and resource-intensive machine-learning process, deep forgery has turned from a technical tool into a cheap (in resources and cost) instrument for manipulating how events in the world are verified; more precisely, for manipulating information about data and events in the localized communicative media space.

Since most of the world's inhabitants have no opportunity to observe events directly, then, as the journalist and political commentator W. Lippmann noted, we receive information about them from third parties, most often interested and partisan ones. It is no coincidence that until recently Twitter's slogan was: "When something happens in the world, it happens on Twitter." Approaching this idea from the standpoint of constructivism, one can cite the German constructivist sociologist N. Luhmann: if the world around us cannot be displayed as it is, then one should look for those who would benefit from reflecting the world's realities on their own terms and in their own interests.

Thus, using the deep fake as a convenient pretext (whether or not a falsification actually occurred), the interested party applies whatever degree of interpretation favors it, from partial neutralization of the information to complete denial of the event as such. Take, for example, the tragic incident involving NATO troops during a transfer of forces in the Baltic states. Let me recall that one of the convoy's military vehicles hit a cyclist in Lithuania, and when eyewitnesses began uploading materials to the network, the NATO leadership, represented by the United States, claimed the photos were fake and that no incident had occurred at all.

American experts (professors Danielle Keats Citron and Robert Chesney) introduced the concept of the "liar's dividend": people can successfully deny the authenticity of information, especially if it portrays inappropriate or criminal behavior, by claiming it is a deep fake. Simply put, if a given piece of information does not benefit me, I say it is a fake; and if it benefits me but not my adversary, I say that you simply refuse to admit the truth, and can be sanctioned for that. Do not confuse this technique with "double standards"; it is, rather, the "dialectical law of the overbearing observer," who chooses ways of making decisions in his own favor.

In this context another field of activity appears, namely the deliberate use of deep fakes as a pretext for carrying out illegal, provocative and inhumane actions. For example, in the "Deep Fakes and National Security" report presented to Congress, American intelligence officials argue that deep falsifications can also be used to create incendiary content, such as convincing videos of US military personnel committing war crimes, for the purpose of radicalizing local populations, recruiting terrorists or inciting violence against those personnel (see footnote 1). From a practical point of view, such a technique is understandable: it serves as counter-propaganda to discredit the enemy's information space, and can be deployed even before illegal operations are carried out on the territory of third countries.

It so happens that we tend to trust the sources we have chosen for ourselves, or think we have chosen. The deep fake plays on our weaknesses and passivity. One of Descartes' main thoughts, as mankind remembered it: "I doubt, therefore I exist." Paraphrased for the modern day: "I doubt, therefore I do not exist."

This transformation stems from the fact that if I doubt information presented to me by an ideologically "correct" source, then there is no place for me in the system of worldviews dominating this social system or, more simply, the country. The 2020 US elections are a notable example: those who doubt J. Biden's victory and insist that the data was falsified are censored on the Internet, unlawfully fired from their jobs and denied banking services. In a country where you can buy gasoline or pay utilities only with a bank card, this practice has disastrous consequences for a person's living conditions.

A new round of censorship

In the end, the peculiarity of methods such as deep forgery is that they can generate a reality with no connection to reality as such, causing a number of consequences disastrous for that reality.

First, as noted above, deep fakes can exert serious pressure on decision-making by the leaders of our communities because of the "nebulousness" and "falsity" of unfolding events. The whole question is where those events unfold: on the pages of The Guardian, on Twitter, or in actual fact?

The second conclusion follows from the first. The deep fake is a self-referential (that is, addressing itself) system of information censorship and propaganda, since whatever is claimed as "truth," or as knowledge of the truth, will then be used as a unified platform for the "correct" democratic point of view. Accusations of Russian interference in the 2016 US elections have become a commonplace that requires neither confirmation nor explanation for the Western man in the street.

And thirdly, as methods of deep falsification develop, the layman, whether adult or teenager, will be unable to distinguish truth from lies. This uncertainty will exert serious psychological pressure on individuals, groups and communities, causing feelings of anxiety, danger, hopelessness, depression and anomie (normlessness).

So who, after all, collects the dividends, the benefit of deliberately influencing the surrounding real world with fakes distributed through the media?

  1. Lech from Android, 12 May 2021 18:26
    Mdaaa, Eugene has raised a burning topic. Monitoring the Internet, I constantly run into what the author describes. Sometimes it is impossible to identify a fake on the fly, whether a photo or a video, so you have to pay attention to the resource it is posted on.
    The virtual world is sometimes hard to separate from the real one... or vice versa. Prankster students often upload their work, and people curse them in the strongest terms for the videos they compile.
    1. astepanov, 13 May 2021 15:16
      A strong article, very convincingly written. Orwell's world is getting closer.
  2. knn54, 12 May 2021 18:32
    And after all, there is practically no protection: you can fake not only the image but also the sound.
    "Whoever controls the data controls the future" (Mark Zuckerberg, founder of Facebook).
    PS I have no doubt that this technology will become as widely available in the foreseeable future as Instagram "masks" are today.
    And then I don't even want to think...
  3. nnm, 12 May 2021 18:39
    How Trump got a job at RT
  4. Maks1995, 12 May 2021 18:52
    Deep fakes are a problem of the future, yes.

    But they grow out of modern lies, which are carefully cultivated and which no one fights.
    Social networks are full of fakes; bloggers often lie, and know that they are lying, while endlessly accusing the central TV channels and official authorities of lying.

    But! The laws work in only one direction: protecting those in power. The authorities can promise and lie all they want.

    So what did you expect? There will be deep fakes, and more deep fakes, and other tough nuts besides.
    If the authorities can, then everyone else also wants to, and can, and has to.
    To lie.
    1. Mikhail m, 12 May 2021 19:22
      Quote: Maks1995
      If the authorities can, then everyone else also wants to, and can, and has to.
      To lie.

      The authorities have orders of magnitude more opportunities for manipulation. Statistics are a good example.
      1. Maks1995, 12 May 2021 23:55
        Right. A blogger, for example, has one site.
        The authorities have thousands.

        EdRo, for example, is already forcibly registering civil servants for the nomination of EdRo deputies and for voting.
        Remotely, naturally.
        So that 80%, or 70%, or 60% of the votes are already in the pocket in advance.
        1. Vladimir_2U, 13 May 2021 03:28
          Quote: Maks1995
          EdRo, for example, is already forcibly registering civil servants for the nomination of EdRo deputies and for voting.

          And this despite the fact that in the last elections many EdRo members did not mention at all in their campaign materials about their beloved selves that they belonged to EdRo. Moreover, some even directly asked in leaflets and the like that the white, red and blue colors not be used, if possible.
          1. Maks1995, 13 May 2021 09:04
            This is well known. And removed, too.

            But the Old Man put on an awesome show: 40% voting remotely, and all for dad... Half the country, count it, at once!

            PZhV was delighted...
  5. Avior, 12 May 2021 18:55
    Fabrication technology keeps growing.
    People used to believe every written word, and "anonymous letters" were in use; then came the era of fake photos, fake audio recordings and fake photocopies; now there will be an era of fake videos.
    You need to get used to living with this and develop critical thinking in yourself.
  6. Knell wardenheart, 12 May 2021 21:33
    In fairness, almost everything described existed before DeepFake. You can find a similar-looking person shot from a certain angle; you can create a very similar timbre of voice; you can artificially degrade the quality and then appeal to the fact that the quality is bad, and so on. Since the 90s all of this has come up now and then, whenever "a person who looks like the Prosecutor General" or something similar pops up somewhere and creates a scandal, and it does not matter whether it really was that person or not. The rumor spreads, and people appear who, for one reason or another, want to ride the wave of hype: "witnesses" who will willingly tell a camera that they saw the subject in the company of girls of easy virtue buying weed, or that he ran someone over and then paid his way out, and so on. In our society, any information thrown in at the right time can create an "air castle" that takes on a life of its own. The quality of such a "throw-in" is often practically unimportant; what matters is the readiness of the environment into which it is thrown. And the environment can be prepared, including through events. Such preparation can be completely invisible; after it, the stuffing "goes down" much better and spreads far more nimbly.
    At the moment, DeepFake technology is still rather primitive, because the human eye can recognize even minor processing artifacts quite easily. Much worse are such "deep throw-ins" landing on fertile soil: even technically weak ones can rock the boat tremendously. There is only one remedy here, increasing trust in the state media, of which there is not even a whiff.
    1. Ka-52, 13 May 2021 06:30
      At the moment, DeepFake technology is still rather primitive, because the human eye can recognize even minor processing artifacts quite easily.

      It is not about the authenticity of the message. There is no need to produce a perfect fake; the question is who the counterfeit is aimed at. Human thinking is for the most part subject to one weakness: a lack of critical thinking. In that state a person simply does not notice the rough edges. Try to explain to Ukrainians that no Russian army is planning to seize Kiev: they will not believe it, and no logical arguments will be accepted, since denial rests on the protective function of the brain.
    2. Normal ok, 13 May 2021 12:18
      Many people look to the media and the Internet not for the truth but for confirmation of their own ideas, and they find channels and sites where those ideas are indulged. There is nothing new here: "you don't need a knife with him, just sing along with him a little and do with him what you like." For such people the quality of fakes is not the main thing; they will believe anyway. Meehan is a prime example. He is not interested in what actually is; he is interested in what he has invented for himself.
      1. Knell wardenheart, 13 May 2021 12:39
        The state itself fosters tolerance for uncritical stuffing: the lion's share of propaganda is built on it (and, by the way, on xenophobia too). So in the end it all comes down to a competition of musicians: who is more skilled, the state forming the propaganda or the state destroying it (or simply pursuing its own specific goals). Having grown used to regularly not receiving important or simply interesting answers to its questions, the population stops asking them or considering them IMPORTANT, either treating everything neutrally and indifferently or taking everything voiced on faith of varying depth. The fight against fakes will not be effective as long as they exist within a larger system that operates on, substitutes and processes information. In such a system the most skilled liar, or the largest group of capable ones, will always win.
  7. nikvic46, 13 May 2021 05:49
    I don't want to know how this is done, but I have no doubt it is being done at the highest level. The Sberbank advertising video with Miloslavsky confirms it. True, it was taken down afterwards; it is still not appropriate to advertise a burglar under the bank's brand. There is an internal protest here, and it is not some kind of self-awareness but, most likely, the law of self-preservation.
  8. Conjurer, 13 May 2021 13:19
    What the author writes about has existed for as long as humanity has: lies and attempts to pass them off as truth.
    Now there is another way to create a plausible lie, but so what? Why the panic? A way to recognize it will surely be found, and the struggle between lies and truth (sword and armor, by analogy) will continue for centuries))))
    1. agond, 13 May 2021 22:23
      Protection against unauthorized substitution and editing of electronic documents, audio and video already exists; it only remains to adopt a law making such protection mandatory for video addresses by the highest officials of the state.