Unlike the remote-controlled tanks, tankettes, and torpedo boats tested in the Soviet Union in the 1930s, fully autonomous combat robots, activists say, will have a number of inherent flaws that could make them deadly to humanity. The most common fully autonomous systems of the past, such as the Phalanx, which shot down aircraft and missiles approaching American warships, had a rather narrow specialization, their own niche, and were unlikely to be thrown off by random factors or to endanger civilians.
But the new generation of combat robots, such as Israel's fully autonomous Harpy UAV, is much more dangerous. So far, the Harpy seeks out and destroys only radars, attacking them in a fully automatic, fire-and-forget mode. Yet such UAVs are attractive to armies around the world, and not only because they cannot be hacked and landed at an enemy airfield, as the Iranian military allegedly did with an American drone in 2012.
The most important factor here is the low cost of operating combat robots. A remote-controlled drone, for example, requires not only an expensive command-and-control station staffed in shifts by well-trained specialists, but also jam-resistant communications. For small UAVs, this in fact becomes the main expense. Eliminating it would allow countries to mass-produce flying combat robots by the thousands without significantly increasing their operating costs.
The Campaign to Stop Killer Robots believes the main problem is that such machines are still imperfect. They cannot distinguish a civilian from a terrorist, or a person of one nationality from another (although, in fairness, humans are hardly infallible in this respect either). Meanwhile, in the modern world, wars are often fought in places where a single stray shot can escalate into wider violence. The organization is particularly concerned about developments in South Korea, which is building a special robot to patrol its border with North Korea. A single mistake by such a robot could trigger a serious armed conflict. A reasonable question therefore arises: should matters of war and peace be entrusted to software whose adequacy and reliability cannot be verified without human casualties?
There are, however, graver scenarios than the Korean conflict. A widely known case occurred in the autumn of 1983, when the Soviet Oko automatic nuclear-attack warning system issued a series of false alerts about launches of US ballistic missiles. Only the intervention of the operational duty officer at the Serpukhov-15 command post, Lieutenant Colonel Stanislav Petrov, prevented a "retaliatory" strike. After analyzing the "launches" of the ICBMs (several in a row from a single point), the lieutenant colonel reasoned that a potential adversary would not be so foolish as to start a war in a way that exposed it to a retaliatory strike from Soviet nuclear forces that remained intact. How might this situation have ended had the Oko system been fully autonomous?
We have been discussing a possible nuclear war, but there are far more prosaic examples. In 2007, a tragic incident in the army of South Africa killed 9 soldiers and injured 14 others. The culprit was an automatic anti-aircraft gun of Swiss-German manufacture, the Oerlikon GDF-005. The gun is equipped with active and passive radars and a laser target designator, and it can engage fast, low-flying targets such as helicopters, airplanes, cruise missiles, and UAVs. In automatic mode it fires two 35 mm rapid-fire cannons.
During an exercise, the installation malfunctioned several times, until it was decided to secure it manually with a cable and metal fasteners. At some point the fasteners gave way, and the gun's barrels began spraying half-kilogram shells left and right. The gun fell silent only after expending its entire ammunition load of 500 shells. Brigadier General Quen Mangop, a representative of the South African army, said afterward that the cause of the failure remained unknown; in his view, the problem may have been mechanical. A number of experts, however, pointed to a computer malfunction, in which case the cause of the tragedy cannot be established at all.
All this looks even more depressing against the background of ever more frequent reports of new combat robots. Not long ago, the US Navy conducted test flights of the X-47B drone, which takes off from the deck of an aircraft carrier and is capable of performing a combat mission without human assistance. Meanwhile, Patriot air-defense missile systems, which can recognize a target and open fire fully automatically, have been around for quite a while. Only a few steps remain to create a fully autonomous combat vehicle. Such robots could take on many human functions, forever changing our understanding of warfare.
The laboratory of Professor Henrik Christensen at the Georgia Institute of Technology in Atlanta is currently testing a robot designed to find insurgents who operate by guerrilla methods. The research is funded by the well-known defense corporation BAE Systems. The project's main goal is to create a robot capable of surveying terrain where the enemy has taken shelter, mapping its likely positions, and collecting other information that would help in planning military operations. Such robots carry no weapons; their sole purpose is to gather intelligence.
Peter Singer, an expert on future military technology at the Brookings Institution in Washington, believes that the appearance of combat robots on the battlefield will raise many fundamental questions. Periodically in the history of military technology, the expert notes, something appears that completely changes the situation: it happened with the invention of gunpowder, the appearance of the machine gun, nuclear weapons, computers. Combat robots could likewise prove a revolutionary technology, and their appearance could change everything, from combat tactics to questions of law, ethics, and politics.
The American Jody Williams, who won the 1997 Nobel Peace Prize for organizing the campaign to ban anti-personnel mines, believes that the combat robots now being created may eventually become a deadly weapon. Terms that sound neutral to the human ear, such as "autonomous combat systems," are quite deceptive, she says; it would be more logical to call them killer robots, since killing people is their main task.
Ronald Arkin, a professor at the Georgia Institute of Technology in Atlanta, thinks otherwise. Arkin is the author of a concept for a combat system governed by a so-called ethical controller: such combat robots would be programmed to follow the rules of engagement and the principles of international law. As Arkin puts it, everyone cries out in horror: "Robots are villains, robots are killers!" But people acting on the battlefield today do terrible things, and cruelty has accompanied every war on the planet, the professor notes. Arkin believes that such technical means could reduce losses among the civilian population in conflict zones.
Besides the United States, about 76 countries around the world now have their own programs to create combat robots, the scientist says. Today, for a couple of hundred dollars, one can already buy a UAV that was still classified just two years ago. Such technologies spread quickly and on a global scale. One example is the use of UAVs for delivering pinpoint strikes on pre-selected targets, including people. The use of strike drones in Afghanistan and Pakistan is already provoking debate in the global community; as combat robots proliferate, such debates will inevitably shift to the ethical principles of their use.
So perhaaps combat robots are not needed at all? Why, then, are they being produced? The fact is that with the advent of mass armies, the effectiveness of the individual soldier plummeted. During the Russo-Japanese War, the soldiers of the 5th East Siberian Regiment, defending Jinzhou, scored roughly one hit on the enemy per several dozen rifle shots. By the First and Second World Wars, the average number of shots per hit had risen to 10,000 to 50,000. Put simply, most soldiers in the mass armies could not really shoot, and more than 95% of commanders in the world's major armies had never even seen their subordinates use the sights on their rifles.
A similar situation arose in the artillery and other branches of the armed forces. On the Eastern Front, the USSR expended about 100 artillery shells and mortar rounds for every Wehrmacht soldier killed. American forces disposed of their ammunition with the same "efficiency" during World War II and the Vietnam War. The numerical growth of armies and the rapid progress of military technology in the twentieth century were accompanied by a decline in the training of those entrusted with these weapons.
Autonomous combat robots, meanwhile, already shoot as well as their software, the weather, and their weapons allow. This means that once their software matures, their participation in hostilities will inflict very heavy personnel losses on the side that lacks such robots. The outcome of such a confrontation is easy enough to imagine. If today the armies of Western countries cannot remain in Iraq or Afghanistan for long, because significant combat losses would force their political leadership from office, then after the introduction of combat robots the presence of occupying troops in various regions of the world could become practically unlimited. The casualties of countries whose armies are equipped with such robots would almost entirely cease to be combat casualties; they would be comparable only to the toll of terrorist attacks, the one weapon left in the hands of militants.