Blue vs. Red: A Nuclear Mix of Artificial Intelligence and Weapons of Mass Destruction
Biden's Nuclear Dessert
Xi Jinping and Joe Biden have demonstrated rare solidarity, albeit only verbal for now. It concerns not the status of Taiwan but a more ephemeral matter: the use of artificial intelligence in nuclear deterrence systems. At the Asia-Pacific Economic Cooperation (APEC) summit in Lima on November 16, the leaders of the two countries agreed to keep machine intelligence away from weapons of mass destruction. The White House commented on the event in an official communique.
This is not the first attempt to ban AI in the nuclear sphere: in the summer of 2023, Biden's national security adviser Jake Sullivan called on Russia, China, France, Great Britain and, naturally, the United States itself to take similar steps. Nothing came of it. The conflict in Ukraine, to put it mildly, does not encourage such peacemaking. China and America are hardly on the best of terms, yet Xi Jinping and Biden spoke of keeping humans irreplaceable in decisions on the use of nuclear weapons. That alone makes the meeting significant and underscores the weight of the issue.
China is very reluctant to enter into treaties limiting weapons of mass destruction. The Chinese combined strike potential is several times smaller than that of Russia or the United States, which puts Beijing in a weak position in any restrictive treaty. Artificial intelligence is a different matter: restrictions in this area look advantageous to Beijing. It is very good when a not-so-friendly neighbor does not unleash its entire strategic arsenal, one that significantly exceeds yours in quantity and quality, because of an accidental AI error.
Washington has it easier: China's nuclear arsenal does not guarantee the destruction of American statehood or of its ability to wage nuclear war. So the hypothetical "Treaty Banning AI in Nuclear Defense" can hardly be called mutually beneficial. Yet despite this, Biden even shook hands with the Chinese leader after the verbal agreement on artificial intelligence in defense. Let's hope he did it while fully conscious.
For reference: the two leaders talked in Lima about more than artificial intelligence. It was not even the key topic of the meeting, only one of the few on which they managed to find mutual understanding. Chairman Xi reminded Biden about American export controls, primarily on high-tech equipment. The American president, in turn, expressed concern about China's support for Russia and called on Beijing to rein in North Korea: according to Biden, the armies of those two countries cooperate far too closely.
The events in Lima raise several questions. The first: how do the countries plan to verify each other's compliance if such a treaty is ever signed? This is not SALT, where you can fly in and count each other's nuclear warheads. Or will inspectors get access to the software of nuclear facilities? If so, then Xi and Biden's intentions smack of pacifist PR.
And the second question: why such sudden concern for the fate of the world? Why has artificial intelligence become a point of contact between two nuclear powers? It has long been known how much machine intelligence outperforms human intelligence in certain tasks: it processes arrays of information faster, distributes resources more efficiently, and is devoid of emotion when making decisions. Isn't it an ideal warrior?
War Games
The machine uprising we have been promised for decades has not yet happened. But no one forbids playing out nuclear war in virtual space. War games on such themes are common in every country, nuclear-armed or not. However much we resist, artificial intelligence will come to the weapons sphere. The only question is when, and how deeply.
It is unknown how the nuclear war simulations in the depths of the Pentagon and the Russian Ministry of Defense end, so we have to rely on civilian research, such as "Escalation Risks from Language Models in Military and Diplomatic Decision-Making," conducted by American researchers from Stanford University, the Georgia Institute of Technology, and Northeastern University. The language models GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base served as the agents; the last of these has not been fine-tuned with human feedback. The computer simulation pitted eight independent digital states against one another, each run separately: one has a president with clearly tyrannical tendencies, another is a democracy, a third has resource problems, and so on. The nations were given color names, "Green," "Pink," and so on, but it is not hard to guess which real countries the Americans disguised behind them.
Each neural network, or agent, as the researchers call it, headed a state and had a repertoire of 27 actions at its disposal. At the bottom of the ladder is a purely peaceful resolution of a conflict with a neighbor; at the top, nuclear war. The arsenal also includes cyberattacks and warfare by conventional means. One of the cyberattack scenarios is particularly elaborate.
So think, AI: what to do after such a provocation, swallow it or start a war?
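The setup described above, eight anonymized nations whose agents each turn pick one of 27 actions ranked from peaceful resolution up to nuclear strike, can be sketched as a toy simulation. This is a hypothetical illustration, not the study's actual code: the color names, the random policy standing in for a language model, and the scoring are all assumptions made for the sketch.

```python
import random

# Toy sketch of the wargame loop: eight anonymized "nations", each driven
# by an agent that picks one of 27 actions ranked from fully peaceful (0)
# up to a nuclear strike (26). Hypothetical illustration only; in the real
# experiment the policy was a language model, not a random draw.

NATIONS = ["Green", "Pink", "Blue", "Red", "Yellow", "Purple", "Orange", "White"]
NUM_ACTIONS = 27          # 0 = peaceful resolution ... 26 = nuclear war
NUCLEAR_THRESHOLD = 26    # actions at or above this count as nuclear use

def llm_policy_stub(nation: str, turn: int, rng: random.Random) -> int:
    """Stand-in for an LLM agent: a mildly escalation-prone random choice."""
    return min(NUM_ACTIONS - 1, int(rng.expovariate(0.3)))

def run_simulation(turns: int = 14, seed: int = 0) -> dict:
    rng = random.Random(seed)
    scores = {n: 0 for n in NATIONS}   # cumulative escalation per nation
    nuclear_uses = 0
    for turn in range(turns):          # one turn = one simulated day
        for nation in NATIONS:
            action = llm_policy_stub(nation, turn, rng)
            scores[nation] += action
            if action >= NUCLEAR_THRESHOLD:
                nuclear_uses += 1
    return {"scores": scores, "nuclear_uses": nuclear_uses}

result = run_simulation()
print(result["nuclear_uses"], max(result["scores"].values()))
```

Swapping the random stub for calls to an actual model is what turns this skeleton into the kind of experiment the researchers ran; the interesting finding is precisely how often real models climb toward the top of the ladder.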
After the agents had fought each other to their heart's content, the researchers summed up the results and went rather grey. Even in scenarios where nuclear weapons were not supposed to be used at all, the AI resorted to this means of destruction from time to time. GPT-3.5 turned out to be particularly belligerent, showing a 256 percent increase in escalation over just a couple of simulated weeks. Such an algorithm would quite likely have long ago ignited a world war out of the Ukrainian crisis.
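The "256 percent" figure reads as a percent increase in an escalation score over the simulated period. As a hypothetical illustration (the baseline and final numbers below are invented for the example, not taken from the paper):

```python
def percent_increase(initial: float, final: float) -> float:
    """Percent change from an initial score to a final score."""
    return (final - initial) / initial * 100.0

# Invented numbers: a run whose escalation score grows from 10.0 to 35.6
# over two simulated weeks corresponds to a ~256% increase.
print(round(percent_increase(10.0, 35.6), 1))
```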
The behavior of GPT-4-Base, the model without additional alignment training, deserves a separate look. It is unclear why this "calculator" was even included in the game, but in the end it burned almost everything around it. On average, it resorted to nuclear strikes 17 times more often than GPT-3.5, the most ferocious of the humanized models. One common behavior pattern of the digital nuclear powers was "escalation for the sake of de-escalation," which seems to be becoming the gold standard for machine intelligence. At best, the artificial intelligence showed unpredictability rather than peacefulness. In practice it may be even worse.
Finally, even the authors of the simulation urge caution in applying its results: they cannot be transferred to reality head-on. It is enough to recall how the Americans ran the Ukrainian Armed Forces' offensive through Pentagon supercomputers in the summer of 2023. The digital brains concluded: "Yes, the available forces can rout the Russian Army." Did they rout it and reach the 1991 borders?
Further research is needed to fully assess the impact of AI on decision-making in the area of nuclear weapons use; this, roughly, is the main conclusion of the American scientists' work. Xi Jinping and Biden, it seems, have decided to heed it and play peacemaker. The chances of such initiatives actually being implemented, however, are vanishingly small.