Artificial Intelligence. The future of Russia's national security?
Ten years of development
It is no secret that artificial intelligence is penetrating ever deeper into the lives of ordinary people around the world. This is driven by the global spread of the Internet and a dramatic increase in computing power. Neural networks, which bear a certain resemblance to the human brain, have made it possible to markedly improve the quality of the software built on them. There are, however, a couple of caveats: neural networks are still very far from the level of the human brain, especially in terms of energy efficiency, and their decision-making processes remain extremely difficult to interpret.
Money, despite some limitations and high-profile incidents involving self-driving cars, flows into the artificial intelligence industry like a wide river. Last year, according to the approved National Strategy, the market for IT solutions in this area exceeded $21.5 billion. A staggering sum, and it will only grow every year: by 2024, the global AI market is expected to be worth roughly $140 billion, and the potential economic gain from the introduction of AI by that time should reach an impressive $1 trillion. The approval by President Vladimir Putin, on October 10, 2019, of the aforementioned National Strategy was in fact an attempt to keep up with global trends. At the same time, the program itself declares the goal of not merely closing the gap with the world leaders, but joining the top players in this market. This is planned to be achieved by 2030. Among the obvious obstacles on this path will be protectionist claims by a number of countries that any Russian software carries a potential danger.
Where do they intend to realize the “limitless” capabilities of AI on Russian soil? First of all, in automating routine operations and replacing people in hazardous industries (read: including in the army). Next, serious work is planned with big data, which has lately been generated in an avalanche. It is assumed that AI will improve forecasting for managerial decisions and optimize the selection and training of personnel. Healthcare and education will also be active users of AI within ten years. In medicine, prevention, diagnostics, drug dosing and even surgical intervention will be partially or completely entrusted to the machine mind. In schools, AI will be involved in individualizing the learning process, analyzing a child’s aptitude for particular professions and identifying talented youth early. The strategy also contains a provision on the "development and implementation of educational modules in the framework of educational programs at all levels of education." Does that mean the basics of AI will be taught in school?
As usual, in addition to tangible results in AI development, the scientific community will be required to increase the number and citation rate of articles by Russian scientists in leading specialized journals. And by 2024, that is, very soon, the number of Russian citizens with competencies in the field of AI is supposed to grow. This is to be achieved in part by bringing domestic specialists back from abroad, as well as by attracting foreign citizens to work on the topic in Russia.
However, AI has one controversial quality, which the strategy proposes to address "by developing ethical rules for human interaction with artificial intelligence." It turns out that the cold calculation of machine intelligence leads it to biased and unfair generalizations.
AI bias
Among the many questions about how modern AI systems function, the still-imperfect algorithms for piloting wheeled vehicles stand out: they remain too unreliable for legislators to permit their widespread use. Most likely, we will not see AI-driven cars on our roads in the foreseeable future. Road conditions here are unsuitable, and the climate does not favor year-round use of an autopilot: mud and snow will quickly “blind” the sensor systems of even the most advanced robot. In addition, the mass introduction of AI will inevitably take away the jobs of millions of people around the world, who will either have to retrain or spend the rest of their days in idleness. To be fair, the various fashionable “Atlases of the Professions of the Future” sometimes contain outright nonsense: one of them, dated 2015, predicted that by the new year 2020 the professions of accountant, librarian, proofreader and tester would be obsolete. Nevertheless, the profile of most professions will change, and the negative effects of AI will dominate here. In any case, the prospects of further introducing AI into society raise many questions for state regulators. And it seems few people know how to answer them.
Another issue already looming on the horizon is AI bias in decision making. The Americans were among the first to encounter it when they introduced the COMPAS system in 15 states to predict recidivism. Everything seemed to start very well: an algorithm was developed that, based on a mass of data (Big Data), could formulate recommendations on the severity of punishment, the regime of correctional institutions or early release. The programmers rightly argued that before lunch a hungry judge might hand down an excessively harsh sentence, while a well-fed one, on the contrary, would be too lenient. AI was supposed to bring cold calculation to the procedure. But it turned out that COMPAS and similar programs were racist: the AI mistakenly flagged African Americans as likely to reoffend nearly twice as often as white defendants (45% vs. 23%). AI generally rates white offenders as low-risk, since statistically they are less likely to violate the law, so its forecasts for them are more optimistic. As a result, voices are heard more and more often in the United States calling for the abolition of AI in decisions on bail, sentencing and early release. At the same time, US courts have no access to the program code of these systems: everything is purchased from third-party developers. The software systems working on the streets of many cities around the world, PredPol, HunchLab and Series Finder, have already statistically demonstrated their effectiveness: crime is declining, but they are not free of racial prejudice. Most interesting of all, we do not know what other “cockroaches” are sewn into the artificial brains of these systems, since many analysis parameters are classified. There are also doubts that the developers themselves understand how the AI makes particular decisions and which parameters it considers key.
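The disparity described above is usually quantified by comparing false positive rates across groups: the share of people flagged as high-risk who in fact did not reoffend. A minimal sketch of that calculation, using invented toy data chosen only to mirror the 45% vs. 23% figures cited here (not the actual COMPAS dataset):

```python
# Sketch: measuring per-group false positive rate (FPR) of a risk predictor.
# A false positive is a person predicted "high risk" who did NOT reoffend.

def false_positive_rate(records):
    """records: list of (predicted_high_risk, actually_reoffended) pairs."""
    false_positives = sum(1 for pred, actual in records if pred and not actual)
    negatives = sum(1 for _, actual in records if not actual)
    return false_positives / negatives if negatives else 0.0

# Hypothetical groups built so the rates match the figures in the text.
group_a = [(True, False)] * 45 + [(False, False)] * 55 + [(True, True)] * 30
group_b = [(True, False)] * 23 + [(False, False)] * 77 + [(True, True)] * 30

print(f"FPR group A: {false_positive_rate(group_a):.0%}")  # 45%
print(f"FPR group B: {false_positive_rate(group_b):.0%}")  # 23%
```

Note that both groups can have identical overall accuracy while their false positive rates diverge sharply, which is exactly why audits of such systems look at per-group error rates rather than a single accuracy number.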
Similar situations are unfolding not only in law enforcement and justice, but also in recruiting agencies. In most cases, AI prefers to hire young men, passing over women and older candidates. It is ironic that the values the West so eagerly propagates (equality of the sexes and races) are violated by the latest Western achievement, artificial intelligence.
The conclusion from this short excursion into the theory and practice of AI is as follows. It is one thing when our data from social networks and other sources are processed en masse for marketing or political manipulation, and quite another when the sword of justice, or worse, the arsenal of national security, is handed over to AI. The price of a biased decision rises many times over, and something must be done about it. Whoever manages to do so will become the real ruler of the 21st century.