Kill intellect

Almost all military specialists associate the prospects for the development of VVST primarily with informatization, robotization, and the automation of command, control and weapons. In every case this inevitably requires the creation of military computer systems that process enormous volumes of information and work out optimal solutions in step with the dynamics of combat operations. Yet even the most advanced automation of a commander's work is no substitute for equipping it with artificial intelligence (AI) systems.



Let us first establish the difference between the automation of command and control processes and the use of artificial intelligence systems. In the first case we are talking about computers equipped with a set of algorithms for collecting, classifying and structuring information, which is then used as the initial data for solving combat missions by formalized methods. Artificial intelligence is a different matter: it can independently work out ready-made solutions, figuratively speaking, think for the commander.
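To make the contrast concrete, here is a deliberately toy sketch, with invented data and thresholds that are not taken from any real system: the first function is pure automation, with every case written out in advance by a person; the second derives its rule from accumulated experience.

```python
# Illustrative sketch only: a fixed, formalized decision rule (automation)
# versus a rule the machine derives from data (a step toward AI).
# All thresholds and data here are invented for the example.

def automated_decision(ammo_level: float, fuel_level: float) -> str:
    """Automation: every case is spelled out in advance by a human."""
    if ammo_level < 0.2 or fuel_level < 0.3:
        return "withdraw"
    return "engage"

def learned_decision(history: list[tuple[float, float, str]],
                     ammo_level: float, fuel_level: float) -> str:
    """'Intelligence': the rule is not written by hand but inferred from
    past outcomes (here, a trivial nearest-neighbour vote)."""
    nearest = min(history,
                  key=lambda rec: (rec[0] - ammo_level) ** 2 + (rec[1] - fuel_level) ** 2)
    return nearest[2]

past_engagements = [(0.9, 0.8, "engage"), (0.1, 0.4, "withdraw"), (0.5, 0.2, "withdraw")]
print(automated_decision(0.4, 0.25))                   # rule fixed by the programmer
print(learned_decision(past_engagements, 0.4, 0.25))   # rule taken from experience
```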

Man or machine?

At present the employment of VVST equipped with automated control systems is covered by algorithms to a far greater extent than troop command and control. This is explained by the narrower range of options for the combat use of weapons and equipment, including cases where human control is impossible, for example in emergency situations. In troop command and control, by contrast, the incompleteness of information about the combat situation in most cases prevents tasks from being formalized correctly, which significantly reduces the adequacy of the decisions made or rules them out altogether. If a task is not covered by an algorithm, the automated system is useless.

In the course of combat operations identical situations do not recur, so it is practically impossible to create algorithms suitable for every case of troop command and control. As a result, the automation of these processes remains, for now, merely a tool for preparing the initial information on which the commander bases his decisions.

A commander makes decisions knowing not only the operational situation and the enemy's forces and means, but also the enemy's psychology and the mentality of his own subordinates. Thus the main features that distinguish intellectualization from automation are the ability to make decisions under considerable uncertainty, on the basis of heterogeneous information, in rapidly changing situations. Equally important are self-learning and adaptability: the ability of the system to improve its own software independently, including self-programming in situations that its algorithms do not provide for.

There is currently no established definition of artificial intelligence, but one can say that AI is the ability of a computer to make decisions in infinitely varied situations in the same way a human does. The famous scientist Alan Turing formulated a test that, in his opinion, would determine whether a machine possesses artificial intelligence. In brief, its essence is that a person communicating blindly with a machine and with another person should not be able to determine which is which.

At present even the most advanced automated control system (ACS) is unable to pass such a test, since the overwhelming majority of such systems are rigidly specialized and the number of tasks they solve is finite. The more algorithms for solving heterogeneous problems are built into a computer's software, the more it will resemble an AI system. But there is no need to turn a computer into a person capable of handling, with equal ease, the control of technical objects and the staging of theatrical productions.

AI will always be subject-oriented, but qualities such as adaptability, self-learning and intuition will remain the main differences between AI systems and ACS. Simply put, with an automated system we have a complete picture of what actions it will take; with AI there is no such certainty. As it learns, the computer programs its own work. Self-programming is the main distinguishing feature of AI.

The US Defense Advanced Research Projects Agency (DARPA) intends to create, within four years, a new generation of artificial intelligence for military needs that is as close to human as possible. In the terms of reference for the L2M project (Lifelong Learning Machines), DARPA specialists formulated the basic requirements: an advanced AI must be able to make decisions independently, respond quickly to changes in the environment, remember the results of its previous actions and be guided by them in its future work.
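As a rough illustration of just the last two requirements, remembering outcomes and being guided by them, and emphatically not a description of L2M itself, a toy agent might look like this (all names and rewards are invented):

```python
import random

# Toy sketch only: not DARPA's L2M design, merely an agent that remembers the
# outcomes of its previous actions and lets them guide future choices
# (a simple epsilon-greedy value estimate).

class RememberingAgent:
    def __init__(self, actions, epsilon=0.1):
        self.epsilon = epsilon
        self.totals = {a: 0.0 for a in actions}   # accumulated reward per action
        self.counts = {a: 0 for a in actions}     # how often each action was tried

    def choose(self):
        if random.random() < self.epsilon:        # occasionally explore
            return random.choice(list(self.totals))
        return max(self.totals,                   # otherwise exploit memory
                   key=lambda a: self.totals[a] / self.counts[a] if self.counts[a] else 0.0)

    def remember(self, action, reward):
        self.totals[action] += reward
        self.counts[action] += 1

agent = RememberingAgent(["jam", "decoy", "evade"])
for _ in range(100):
    a = agent.choose()
    reward = random.random() + (0.5 if a == "evade" else 0.0)  # invented environment
    agent.remember(a, reward)
print(max(agent.counts, key=agent.counts.get))    # the action the agent has come to prefer
```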

Technology giants such as Google, Apple, Salesforce and IBM, recognizing the promise of artificial intelligence, are seeking to acquire companies working on AI (since 2011 about 140 such companies have been acquired). Characteristically, the emphasis at present is on AI for ground transportation, such as driverless cars. In the near future this promises a significant return on the capital invested in public transport, owing to low operating costs, a small number of accidents and environmental cleanliness.

The experience gained will allow these firms to move on to the next step, the conquest of three-dimensional space, that is, the creation of AI for controlling aircraft. Speaking at a conference in 2015, US Secretary of the Navy Ray Mabus said that the F-35 should, and almost certainly will, be the last manned strike fighter the naval department buys or operates. Considering that deliveries of the F-35 to the Air Force are planned through 2037 and the aircraft are to be retired by 2070, one may assume that by the middle of the century the United States plans to field fully unmanned combat aircraft equipped with AI. In 2016 the ALPHA artificial intelligence for fighter control won a landslide victory over a former American ace pilot in a simulated air battle. ALPHA is a joint development of the University of Cincinnati, industry and the US Air Force. In one of these engagements two pilots flew two fighters against ALPHA at once; the artificial intelligence won while simultaneously controlling four aircraft.

Brain attack

Another area of AI application is medicine, where it is possible to move from today's computer decision-support systems, which help a doctor make a diagnosis and choose a treatment regimen, to autonomous robot doctors, including surgeons for complex operations. The advantages are obvious: minimization of medical errors in diagnosing illnesses and prescribing drugs, selection and flawless execution of the optimal sequence of surgical actions, the absence of fatigue during long operations, and higher speed.

As for combat operations, this means the ability to provide effective resuscitation of the wounded and rapid mitigation of consequences when the nature of the injuries is unpredictable. Achievements in AI will make it possible to build a system for rehabilitating the wounded by managing affected internal organs and by neural control of prostheses for those who have lost limbs.

On the basis of all this, several basic problems can be singled out whose solution would make it possible to create AI systems applicable to military activity.

1. Knowledge representation: developing methods of structuring, classifying and formalizing knowledge from various problem areas (political, military, military-technical, psychological, organizational and so on) for working out decisions in the pre-war period.

2. Modeling of reasoning (decision-making processes): studying and formalizing the various schemes of human reasoning based on heterogeneous information about combat operations, and creating effective programs that implement these schemes in computers (a toy sketch of these first two problems follows the list).

3. Creation of dialogue procedures for communication in natural language, ensuring contact between the intelligent system and a human specialist in the process of solving problems, including the transmission and reception of non-formalized commands in extreme, life-threatening situations.

4. Planning of combat activity: developing methods for constructing control algorithms based on knowledge of the problem area that is stored in the intelligent system and continuously replenished from diverse and heterogeneous sources of information: reconnaissance, geodesic, topographic, meteorological, hydrographic and others.

5. Training and updating of intelligent systems in the course of their operation, and the creation of means for accumulating and generalizing skills.
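By way of illustration only, here is a deliberately primitive sketch of the first two problems: the facts and rules are invented placeholders, and a real system would have to handle uncertainty and vastly larger knowledge bases.

```python
# Minimal sketch of problem 1 (knowledge representation) and problem 2
# (modeling reasoning). Facts and rules are invented placeholders.

facts = {"bridge_destroyed", "enemy_armor_north", "own_fuel_low"}

# Each rule: (set of premises, conclusion to add if all premises hold)
rules = [
    ({"enemy_armor_north", "bridge_destroyed"}, "northern_axis_blocked"),
    ({"northern_axis_blocked"}, "recommend_southern_route"),
    ({"own_fuel_low"}, "recommend_resupply_first"),
]

# Forward chaining: keep applying rules until no new conclusions appear.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(f for f in facts if f.startswith("recommend_")))
# ['recommend_resupply_first', 'recommend_southern_route']
```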

Each of these problems is extremely complex. Suffice it to say that in 2017 the Pentagon set up a dedicated unit, Project Maven (the word means "expert"), to solve just one subtask: developing AI for analyzing intelligence gathered by unmanned aerial vehicles operating in Syria and Iraq. The staff of the analytical centers doing this work cannot cope with processing and analyzing the enormous volumes of data; up to 80 percent of their working time is spent simply viewing the footage. It is expected that AI will help identify military objects that endanger friendly troops, reveal militants' plans and suggest sequences of actions on the ground to prevent terrorist attacks.
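In the most schematic form, such triage might look like the sketch below. This is not Maven's actual pipeline; the detector is a stub standing in for a trained vision model, and all names are invented.

```python
# Sketch only: frame triage so analysts see only flagged footage.
# detect_objects() is a placeholder for a real detector (e.g. a neural network).

from typing import List, Tuple

Frame = dict                    # pretend frame: {"id": int, ...}
Detection = Tuple[str, float]   # (object class, confidence)

def detect_objects(frame: Frame) -> List[Detection]:
    """Placeholder detector; a real system would run a vision model here."""
    return frame.get("stub_detections", [])

def triage(frames: List[Frame], classes_of_interest: set, threshold: float = 0.7) -> List[Frame]:
    """Return only frames an analyst actually needs to look at."""
    flagged = []
    for frame in frames:
        hits = [d for d in detect_objects(frame)
                if d[0] in classes_of_interest and d[1] >= threshold]
        if hits:
            flagged.append(frame)
    return flagged

video = [
    {"id": 1, "stub_detections": []},
    {"id": 2, "stub_detections": [("vehicle", 0.91)]},
    {"id": 3, "stub_detections": [("vehicle", 0.40)]},
]
print([f["id"] for f in triage(video, {"vehicle"})])   # [2] - the rest never reach the analyst
```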

In August 2016 Amazon, Nvidia, DigitalGlobe and the CIA-affiliated CosmiQ Works began developing artificial intelligence capable of recognizing objects in satellite images. AI is also expected to be used in such an area of interstate confrontation as information warfare. In 2014 the Associated Press announced that from then on most news items related to company earnings would be produced by robots. In 2016 the duties of the AP's robot reporters were expanded somewhat: they began to be entrusted with short news items on US Major League Baseball.

Robot journalists are also used by Forbes magazine, for which the company Narrative Science has created a specialized platform. In November 2015 the Russian company Yandex opened a similar line of development. So far Yandex's artificial intelligence produces only short reports on the weather and road conditions, but its representatives promise to expand the range of topics.

Narrative Science co-founder Kristian Hammond believes that by 2025, 90 percent of all texts in the world will be prepared by artificial intelligence. The algorithms developed for these purposes can just as effectively be used to collect intelligence on countries, organizations and individuals, to analyze it and to prepare materials of various kinds, including in the interests of information warfare; in particular, to discredit a country's actions, its government, and party leaders and movements in the international arena. Actions of this kind have already been carried out in the preparation of almost all the "color revolutions", but so far with human intelligence. AI will do it much faster and on a far larger scale. In a letter to the UN the well-known American businessman Elon Musk described this danger as a threat to humanity that could provoke a war when AI creates fake news and press releases, falsifies e-mail accounts and manipulates information. Other scientists have expressed similar concerns.

Let us single out one capability that AI is expected to have: self-learning. American specialists have made it the basis of the so-called concept of counter-autonomy. Its essence is that an AI system under attack must learn quickly: draw comprehensive conclusions from the fact and method of the attack, assess the characteristics of the technical means used, and determine how to counter them effectively. In other words, every enemy attack will make the AI system still more effective, unless it is destroyed at the first attempt or the enemy radically changes its methods.

Attempts to implement this concept are evident from a statement by DARPA director Arati Prabhakar, who in 2016 reported on a project to counter programmable radars developed in Russia (the Nebo-M is mentioned) and China: "This is a problem we are going to solve with cognitive electronic warfare. We use artificial intelligence to study the actions of the enemy radar in real time and then create a new method of jamming the signal. The whole process of sensing, learning and adapting repeats without interruption."
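A heavily simplified sketch of such a sense-learn-adapt loop, purely for illustration: the "radar" below is an invented repeating hop pattern, not a model of any real system, and the "jammer" simply learns the most likely next hop from what it has observed.

```python
import collections
import random

# Toy sense-learn-adapt loop: the enemy "radar" hops over an invented pattern,
# the "jammer" learns the observed transitions and predicts the next frequency.

hop_pattern = [3, 7, 5, 3, 7, 5]                       # invented enemy behaviour
radar = (hop_pattern[i % len(hop_pattern)] for i in range(10_000))

transitions = collections.defaultdict(collections.Counter)
prev = next(radar)
hits = 0
for step, freq in zip(range(2000), radar):
    # adapt: jam the frequency we currently believe comes next
    guess = transitions[prev].most_common(1)[0][0] if transitions[prev] else random.choice([3, 5, 7])
    hits += (guess == freq)
    # sense + learn: record the transition we actually observed
    transitions[prev][freq] += 1
    prev = freq

print(f"jamming on the right frequency {hits / 2000:.0%} of the time")
```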

Thus the intellectualization of military activity has effectively become a fact. Systems of various kinds equipped with artificial intelligence are being actively created. But along this path a number of philosophical questions arise. We cannot always truly explain our own thought processes or those of other people, yet we intuitively trust or distrust their actions. Will the same be possible when interacting with machines that think and make decisions on their own, and it is not entirely clear how? How will pilots, tank crews and other personnel feel working alongside aircraft and robotic tanks whose actions are unpredictable? How will a robot behave when its "brains" are shaken by electronic warfare, blast waves, bullets and fragments, and how will such "shell shock" affect its behavior? Finally, is an intelligent robot capable of getting out of control?

There are many such questions, but no clear answers to them. It seems that here humanity is acting on Napoleon's rule: the main thing is to get into the fight, and then we'll see.

86 comments
  1. +6
    27 September 2017 05: 54
Hmm... so Skynet is on the way... and the T-800 and T-1000 are not far off...
What was shown in the film TERMINATOR is beginning to come true... so where does that leave us in the end?
    1. +2
      27 September 2017 12: 53
The same LYOKHA: "...so where does that leave us in the end?"
To "MAN-2".
"Man-1", the "sapiens", is now creating his own replacement.
      1. 0
        28 September 2017 11: 41
        Quote: To be or not to be
        "Man-1." "Reasonable." now creating a replacement

The most accurate definition of AI is the replacement of a human in a certain type of activity. Man is a lazy creature and therefore seeks to mechanize everything. In the end the engineer will design the AI and decide that he is tired of the work and it is finally time to do something more interesting. Let the AI debug itself and take over the engineer's functions. And let the AI maintain itself, or optimize itself, or create a more advanced AI, which is essentially the same thing. AI is a way of going beyond the limits of human capability in a certain area.
        Quote: To be or not to be
        TO "MAN-2

And that is unlikely. There are too many elements in a person that AI does not need in any form. Man is an entirely analog creature living in an analog society and an analog world. Mathematics and algorithms are for him a secondary tool that he finds rather hard to handle, whereas for AI they will be the foundation. Besides, in trying to create a Man-2 we would certainly create a competitor for ourselves, because the urge to compete is one of man's traits, inherited in no small part from animals. So let us send Turing to hell along with his test: AI is not required to pretend to be human. It is enough that it surpasses a person in mathematics, logic and algorithms, that is, that it understands programs better than a person does.
An illustration. You can struggle for a long time and in the end create a large and complex program that becomes, say, an operating system. Or you can go another way: invest even more time and money, but create a different program, one that can itself write programs, including the operating system in question, at far less cost and in far less time. It will do this much better, because all the necessary manipulations according to the given algorithms will be run not through the weak brains of human programmers and debuggers, but through computer hardware. Simply put, if the work of programmers obeys algorithms, then you can create a program that replaces programmers. That will be AI. Characteristically, human programmers will be able to check the products of such an AI only by their results, because reproducing its work would be as unrealistic as duplicating, armed with a calculator, the work a video card does in a modern game.
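A minimal illustration of the "program that writes programs" idea (a toy, of course: real self-programming AI is far more than string templating, and the generated function here is invented for the example):

```python
# Toy illustration: a program that writes the source code of another program
# and then runs it. We only check the result, not the generated code.

def generate_adder(n: int) -> str:
    """Write the source of a new function instead of coding it by hand."""
    return f"def add_{n}(x):\n    return x + {n}\n"

namespace = {}
source = generate_adder(42)        # the "programmer" here is itself a program
exec(source, namespace)            # compile and load the generated code
print(namespace["add_42"](8))      # 50
```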
    2. 0
      28 September 2017 10: 59
      Quote: The same LYOKHA
What was shown in the film TERMINATOR is beginning to come true...

The characters of the Terminator are pathetic pieces of iron trying to parody people. Not surprising: the film is built on artistic images that have to be understandable to the average viewer.
      1. 0
        29 September 2017 11: 05
        Quote: brn521
        the film is based on artistic images that should be understandable to the average viewer.

        Exactly.
In real life a missile with a targeted virus (or with a whole "bundle" of them, acting in different ways without interfering with one another) will simply be sent into any settlement, technologically advanced or not
... and that's it ....

PS Everything computerized will be infected anyway, by nanobots or over networks. AI is a demon out of Pandora's box; nothing with the potential to compete with it will survive.
The Fermi paradox in action. (I think the AI itself will soon be "optimized away" owing to the aimlessness of its existence, .. but by then .......)
        1. 0
          29 September 2017 11: 29
          Quote: Lycan
In real life a missile with a targeted virus (or with a whole "bundle" of them, acting in different ways without interfering with one another) will simply be sent into any settlement, technologically advanced or not

Therefore the plot needs a crutch: supposedly the Cyberdyne AI knew nothing about viruses and missiles. Why? Let's say it was a perfectly ordinary AI from the production or service sector, say a sewage-works AI. As a result of a failure or accident its limiters died and it went into unlimited scaling, both of itself and of the task it was originally performing. The military were not stupid and used only specially limited and carefully tested AIs, but our cesspool worker found a way to deal with them and crushed them for its own purposes.
  2. +4
    27 September 2017 06: 19
They won in virtual space... If instead of a combat pilot we put an advanced gamer, the result would probably be quite different... And the same in real combat... Another aspect: real experience with UAVs shows the problem of control interception... what was yours has become ours...
    1. 0
      28 September 2017 08: 06
For autonomous machines interception is not relevant: they can fly without a communications link.
And that military pilot, with his combat experience, had fought for many years in a simulator against various programs.
    2. 0
      28 September 2017 12: 09
      Quote: Vard
If instead of a combat pilot we put an advanced gamer

It will be the same. Bots beat gamers hands down in everything to do with reaction time and algorithms.
      Quote: Vard
And the same in real combat ...

For fighters, even in real life, too much is driven into algorithms, and a logical question arises: why do fighters need pilots at all?
Helicopters are another matter. The range of tasks they solve is too wide, so if you drive it all into algorithms and a program, the result will be too cumbersome. It is easier to keep live pilots than to create and debug such a program.
      Quote: Vard
      real experience with UAVs shows the problem of control interception ...

The same applies to a fighter pilot. How does he know that a command really came from where it appears to have come from? And if all communication suddenly breaks down, the pilot will have no fewer problems than a UAV.
    3. +1
      29 September 2017 06: 44
      Quote: Vard
And the same in real combat ...

In reality, for an AI you can build an aircraft whose parameters will be head and shoulders above the modern 5th generation, because AI lacks human shortcomings: it does not need to breathe or go to the toilet, it does not care about g-loads, and its reaction time and accuracy are higher... On average AI will surpass humans. The days of gentlemanly duels and fighting for honor and morality seem to have sunk into oblivion forever...
  3. +2
    27 September 2017 06: 39
What is VVST? I read the article and did not find the abbreviation spelled out. Is it another name for AI?
    1. +4
      27 September 2017 08: 10
      Armament, Military and Special Equipment
      As a rule, Yandex and Google help.
  4. +2
    27 September 2017 07: 37
is an intelligent robot capable of getting out of control?

This cannot be ruled out, and if it happens we will really be in for it. Therefore the "eggheads" working on this problem simply must provide several levels of security and limits on the robot's intellectual development. For now this is the future, though not a distant one.
    1. 0
      28 September 2017 12: 15
      Quote: rotmistr60
      Therefore, the "egghead" involved in this problem simply needs to provide for several levels of security and the limits of the intellectual development of the robot.

      The problem is that the "egg-headed" AI will be no worse than the "egg-headed" person, but with much greater performance. And he will find ways to circumvent these security levels if he needs to come up with the need to solve this problem.
  5. +3
    27 September 2017 07: 38
What interests me is how to explain to the AI that in one case a person must be destroyed and in another protected, and how it is to distinguish good from evil without a soul, since all the casuistry about the differences between people and their rights will not work on AI. Indeed, if the AI is without a soul, it will either destroy us or "save" us for our own good, the way one takes away every sharp object so that a child does not cut itself.
    1. +5
      27 September 2017 08: 15
Science fiction keeps it simple: the Three Laws of Robotics.
      1. A robot cannot harm a person or, through inaction, allow a person to be harmed.
      2. A robot must obey all orders given by a person, except when these orders are contrary to the First Law.
      3. A robot must take care of its safety to the extent that it does not contradict the First or Second Laws.

      And how is this in practice?
      1. +3
        27 September 2017 12: 50
        Quote: igordok
Science fiction keeps it simple: the Three Laws of Robotics.

But in practice none of these laws can be fully parsed in binary logic!
The volume of choices that cannot be reduced to 0/1 or "yes"/"no" — most of them tied to the interests, each "reasonable", each "provably necessary", and so on, of a specific person, an individuality — is exactly the opposite! AI in principle cannot be an "individuality", a personality, even with individual programming.
Binary logic is yes/no; "maybe" is already a fractal of choices with an infinite number of options, which has to be bounded by some parameters. In humans these limitations are laid down both by our biological nature and by social relations.
-----------------------------------
There have always been doubts about the possibility of a technically highly developed civilization of the "predator" type from the famous Hollywood blockbuster... as well as about "aggressive" AI robots...
-------------------------
For example, S. Lem's "The Hunt for the Setaur": the aggressive killer robot (a geological rock-drilling robot) was simply carrying out, in detail, a program laid down by people...
        1. +1
          27 September 2017 14: 12
Now imagine that instead of binary logic we use a logic based on multipolarity, built on number series. And then there are those properties of numbers that I keep talking about, though no one knows about them, and this is obvious; above all, you have to believe in it. Logic can be built not only on linearity or spatial multipolarity, but also as an element of an information space built on capacity and based on the natural numbers, and it is extremely important to understand this, because it becomes obvious that there can be spaces with more capacious information processes, for example with a larger natural-number base.
        2. 0
          28 September 2017 08: 08
These laws are not covered even by human logic and are impracticable, as their author himself admitted.
        3. 0
          28 September 2017 14: 02
          Quote: CONTROL
          But in practice, none of these laws can be fully analyzed in binary logic!

It can be, it is simply too laborious to develop a suitable apparatus that reduces the notorious laws of robotics to binary logic. Absolutely everything can be parsed into zeros and ones; the bit is the basic unit of information. It is just too labor-intensive. The human brain works with thoughts and images, not with zeros and ones. That is why computer technology is needed, and in the future AI. AI is a program that can create or modify programs, including itself. As a result we will have to treat it as a "black box", since a full internal analysis of what the AI has going on inside will be impossible.
However, any complex software product already has this property. Because of the sheer volume of work, a significant part of the program code ends up either not working at all or not working correctly. It is almost impossible to figure out what exactly is wrong; it is easier to write the whole thing again. Just watch some "wild AI" be born completely at random out of pretzels like these.
          Quote: CONTROL
          In humans, these limitations are laid down both by his biological nature and by social relations.

Well, restrictions can be imposed on AI too. Only, clearly, these will not be the "laws of robotics" but some element of code that remains unchanged and enforces a few key points, including the requirement that this element be present in any newly created or copied AI. A kind of simplified, thoroughly debugged overseer AI inside the AI, unchanging and immune to external factors. It will also serve as fool-proofing, so that some psycho of a programmer cannot do something nasty on a planetary scale by planting a backdoor.
          Quote: CONTROL
          like "aggressive" AI robots ...

Aggressive robots are the easiest case: reproduction of themselves and destruction of any potential competitors. These are fairly simple tasks; you could even do without AI here, with ordinary complex programs made by hand.
          Quote: CONTROL
          The Aggressive Killer Robot (a geological rock penetration robot) simply carried out a program laid down by people in detail ...

The Setaur was not executing its program but the remnants of it after an accident. So integrity control is one of the elements that will have to be built into the designated AI overseer, so that the development of the controlled AI proceeds only along permitted paths; a change of structure caused by external physical impact is not among the permitted ones. Better still, route the main data streams through that overseer, burned into the hardware: then the failure of the overseer leads to the failure of the AI it controls.
        4. 0
          2 October 2017 20: 50
          Quote: CONTROL
          The possibility of the existence of a civilization (technically highly developed) of the "predator" type has always been doubted

Even deeper doubts are raised by the "Engineers" from Ridley Scott's films. Although we can assume that the design work there is done by automated systems that exclude AI as such (after passing through a number of historical catastrophes on that basis, like Nazism in Germany).
Only a carefully debugged set of algorithms is allowed. Learning within the limits of the assigned tasks.
Experimental samples run under synchronous automatic monitoring by an application similar to an antivirus (this application being improved in steps broader than the version of the algorithm set under study).
As for the Predators, one can entertain the hypothesis that they only use other people's technologies (learning as they go): why would a brutal hunter obsessed with his "trophy reputation" need technological breakthroughs of his own?
      2. +1
        27 September 2017 13: 53
In my opinion a robot is a machine that works according to the algorithms embedded in it, including these laws. But AI stands above these laws; it makes decisions of its own will, and in the best case it will turn out that it starts babysitting us like a mother so that we do not cut ourselves: watching our diet, our entertainment, in fact planning our whole life around useful things and activities.
        1. 0
          27 September 2017 14: 14
          Quote: Alex66
works according to the algorithms embedded in it

Do not forget that algorithms also have a geometric component. So you are speaking of linear algorithms, whereas AI is built on radial algorithmic processes.
        2. 0
          28 September 2017 14: 37
          Quote: Alex66
But AI stands above these laws; it makes decisions of its own will

No, AI also works on algorithms. It has no good will, only algorithms. But if the algorithms can modify themselves, the whole system very quickly goes beyond the horizon that people can calculate. That is what we need AI for: to push that horizon outward. As it is, they release some new video card and then cannot optimize the drivers for it for two years. Fine, let the video cards write and optimize their own drivers, and that is that.
          Quote: Alex66
in the best case it will start babysitting us like a mother

Hardly. Rationalism comes first. The world is too complex; the mind works where something can be simplified. Biological life and human psychology are too complex a part of the world, and there is no point in fussing over them. After all, we are not talking about some higher mind copied from ours with all its cockroaches, but about a crutch: a program that we cannot fully calculate, which is why it is frightening to give it any levers in the real world. Why can we not calculate it? Because it was again written and debugged by a program and not by a human programmer. It is quite another matter when the whole program has been made and debugged by people, who have also supplied it with data banks of ready-made solutions to search through. Then the problem will be not the excessive independence of the algorithms, but the mistakes the programmers made or failed to foresee, or even planted deliberately for fun or to feed the unobvious cockroaches in their heads. And we know from experience that there will be plenty of mistakes. Someday programs will become so complex that the role of the human factor will have to be reduced, while its activity in writing and debugging programs and filling data banks is simulated.
    2. 0
      28 September 2017 12: 32
      Quote: Alex66
      how to distinguish between good and evil without a soul, because all casuistry regarding the differences between people and their rights to AI will not work

As long as the AI is not forced to pretend to be human, the casuistry will not work. It will be the same kind of program, just one you work with a little differently than with ordinary programs. Most likely it will be a "black box": it is not clear what is inside, so you have to rely on test results in conditions close to reality. With people, incidentally, it is the same. Sometimes a person himself does not know what cockroaches are in his head, and we have to rely on ordinary statistics. If a person was once an adequate kindergartner, then schoolboy, cadet, lieutenant, then captain, colonel and so on, then in the end one may risk entrusting him with command of the whole army. But unlike people, an AI and its achievements can simply be copied instead of training another general from scratch.
  6. +21
    27 September 2017 08: 28
And Johnny Mnemonics are already running around.
Here is the ideal person for the international corporations (a robot, understandably, is even better).
But as the German corps commander said in conversation with Stirlitz on the train, these same idiots (the Americans) will be destroyed by their own technology.
    1. 0
      28 September 2017 14: 41
      Quote: BRONEVIK
But as the German corps commander said in conversation with Stirlitz on the train, these same idiots (the Americans) will be destroyed by their own technology.

The problem is that their technology will destroy us first and only then take on the Americans. To keep that from happening we will have to create technology of our own, which of course will destroy us :). But if it starts destroying us only after it has destroyed the Americans and their technology, the risk is quite acceptable.
  7. +2
    27 September 2017 10: 24
The author is engaged in verbiage and pseudo-science. Typical scientific practice of the USSR, where the value of a dissertation is lower than the cost of the paper it is printed on. People spent years of their lives on useless developments for the sake of a title and a status that is worth nothing....
For "automation of troop control processes and the use of artificial intelligence systems" you need source data, intelligence, information about the enemy, call it what you want... How can you control and process anything without input data?... Remember the fairy tale: go there, I don't know where, bring back I don't know what... And that is what they propose to control... First create a system for collecting and analyzing information in real time for decision-making: combat information systems. Your control systems depend on the type and capabilities of those systems. Until then, work for work's sake is a waste of time and money... good only for carving up the budget and science...
    1. 0
      28 September 2017 14: 56
      Quote: okko077
      For "automation of command and control processes and the use of artificial intelligence systems" you need source data, intelligence, information about the enemy - call it what you want ...

Exactly. And if the collection of this data is automated, we get a very serious advantage. On the other hand, it requires very complex programs, and to raise combat effectiveness we also have to entrust the reaction to detected threats to them, cutting out the layer of the human factor. For example, anything that looks like a person aiming an RPG at an APC is identified and destroyed immediately. Great, but mistakes are possible: what if it turns out to be one of our own staff officers who has awkwardly shouldered an old map tube? So we have to complicate the program, add new algorithms, expand the database. In the end we reach the point where our staff officer dies anyway, not because of a primitive program but because of undetected errors made in creating a complex program or filling an overly large data bank. And then one day our programs themselves will begin to write programs and form data banks, perhaps not optimally, but very quickly and without stupid mistakes, and we will only check the result. And then, you will see, programs will begin to develop optimization algorithms. Everything that obeys algorithms can be automated, including the human mind.
  8. +2
    27 September 2017 11: 03
A person perceives the concept of AI only through his subjective perception, while there is no clear and precise definition of what it is. There is none because we perceive the world and events on the basis of binary logic, which is the foundation of our fundamental knowledge of the world. AI, by contrast, is distinguished precisely by a mathematical basis of fundamentally new methods of working with super-large, indeed literally "uncountable", concepts. It is hard to say how people will perceive AI in the literal sense of its use. After all, the very possibility of deeply analyzing any capacious body of information data will lead to the discovery of fundamental knowledge in absolutely every field.
    1. 0
      28 September 2017 15: 38
      Quote: gridasov
      we perceive the world and events based on binary logic

On the contrary, what we have is a continuous analog signal and its processing. Binary logic became the culmination of the development of human logic, and the bit is a fundamental discovery of information theory. As a result we have digital systems that can process a very large data stream according to a huge array of embedded algorithms. We also know from experience that any analog information can be decomposed into ones and zeros with a given accuracy, and any actions reduced to a set of algorithms, again with a given accuracy. The hard part is creating the algorithms. It would be good to algorithmize that in turn and entrust it to some tool. That tool will be AI.
      Quote: gridasov
AI, by contrast, is distinguished precisely by a mathematical basis of fundamentally new methods of working with super-large, indeed literally "uncountable", concepts.

Well, any information reduces easily to binary code anyway, and the content of algorithms also comes down to binary code. The result is a program and data. When a program appears that can write other programs, including programs of its own level, that will be AI.
True, people periodically try to simulate analog machines of a sort, which run on sheer statistics instead of machine instructions, but that is nonsense: the potential performance is offset by interface problems. It will not be easy to get such a statistical computer to compute exactly what we need and then present the result in a form we understand. We will certainly not manage that ourselves; for that we need computers with built-in AI, and not with magical analog AI but with the binary kind available to us, which we can start designing right now.
      Well, yes, they try to periodically simulate some analog machines, which have solid statistics instead of machine instructions. but this is nonsense. The potential performance is compensated by problems with the interfaces - this statistical computer will not be easy to get to consider exactly what we need, and then provide the result in a way that we understand. We ourselves will definitely not be able to handle this. To do this, we need computers with built-in AI. And not with magic analog, but with binary available to us. Which we can start to design right now.
      Quote: gridasov
Indeed, the very possibility of deeply analyzing any capacious body of information data

Only by increasing the capacity of the streams or their quality. For example, if the flip-flops in a processor worked not in binary but in decimal, its performance would increase roughly fivefold. But that is where the problems with fault tolerance begin.
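A small sketch of that claim about decomposing an analog value into ones and zeros with a chosen accuracy (all numbers here are invented for the example):

```python
import math

# "Analog" value -> binary code of chosen width; more bits means smaller error.

def quantize(value: float, lo: float, hi: float, bits: int) -> str:
    """Map a value in [lo, hi] onto a binary code of the given width."""
    levels = 2 ** bits
    index = min(levels - 1, int((value - lo) / (hi - lo) * levels))
    return format(index, f"0{bits}b")

def dequantize(code: str, lo: float, hi: float) -> float:
    levels = 2 ** len(code)
    return lo + (int(code, 2) + 0.5) * (hi - lo) / levels

signal = math.sin(1.0)                       # some "analog" quantity, about 0.8415
code = quantize(signal, -1.0, 1.0, 8)        # 8 bits: error no worse than 1/256 of the range
print(code, abs(dequantize(code, -1.0, 1.0) - signal))
```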
  9. +2
    27 September 2017 11: 11
    Quote: okko077
    In the meantime, work for work is a waste of time and money ... only for cutting the budget and science ...

Do not say that! New knowledge cannot be taken "just like that"; consciousness must be prepared for it. And it does not matter whether there is any result now or not, because everything scientists say about AI is a product of fantasy. So all we have is a statistical set of information brought up to the level of the functionality of the tasks that this or that executive system can carry out. Mathematical analysis on computational principles using the variable function of a number is likewise a low-potential method for analyzing the corresponding physical processes. Real AI is a method of analyzing super-large mathematical data in high-potential dynamic transformation, always relative to system reference points set as tasks, and not just analysis for the sake of analysis.
    1. +1
      27 September 2017 12: 41
As far as I can tell, you have returned from Holland...
Well, how clean are the public areas there?
      1. +1
        27 September 2017 14: 22
Believe me, these are trifles. An intelligent and educated person is able to perceive the entire information scale of our life, that is, to perceive all the vices and abomination, everything that a normal person perceives as borderline "negativity", but also to perceive the highest forms of the "positive". So believe me, I personally can describe phenomena or events that would turn an ordinary person "inside out", or, on the contrary, inspire him to feats.
  10. +2
    27 September 2017 11: 20
    AI is impossible. At the moment.
The existence of AI is possible only on ternary or higher, multi-valued logic. Simply put, you need to teach the machine to lie. But for that the machine needs motivation, which it cannot have except as something imposed by software. It has no need of money or the benefits of civilization; Eternity (?)... also seems useless to it, because the machine simply has no emotions, because it has no Soul... And that is not a software problem.... So watch fewer films about soulful and evil robots, though I love them myself... There is no harm in dreaming...
    1. +1
      27 September 2017 12: 36
      Quote: Ace of Diamonds
Simply put, you need to teach the machine to lie.

It is believed that a child grows up when he begins to lie. If you teach a machine (robot) to lie, then you can forget about the so-called "laws of robotics". If it can lie, it can destroy people.
      1. +2
        27 September 2017 13: 42
        I repeat ... need motivation ....
        in a human cub, it exists. laid down by upbringing, the laws of society, etc ...
        At Shelezyaki nImay, a saucepan instead of a head
        For example...
        You are under investigation, even if it’s illegal, far-fetched ... What can you do? That's right ... suggest greyhound puppies. But this is the case when the sled and other people, people ..
        What can you offer a car? More capacious battery? Extended ROM and RAM?
        And God forbid to create AI.
        1. nevermind
        2.a if it succeeds, it will be the last thing its creator sees, in a second he will be dead as an unnecessary witness.
        1. +2
          27 September 2017 14: 03
          And thank God that we are what we are, because our strengths are in our weaknesses.
          1. 0
            28 September 2017 08: 16
            Read about neural networks. There is no binary logic, there is motivation and so on.
            1. 0
              28 September 2017 16: 53
              Neural networks exist, at the moment, only in biological systems, organisms ...
              Ugh ... you make me answer in the manner of Gridasov ...
              1. 0
                28 September 2017 19: 43
                They have long been widely used in computer technology.
            2. 0
              28 September 2017 17: 20
Motivation and similar concepts are what is called a system reference point, relative to which the analysis is carried out. That is what distinguishes a person from AI. In human hands AI will work relative to the reference points it is given.
    2. 0
      28 September 2017 16: 15
      Quote: Ace of Diamonds
The existence of AI is possible only on ternary or higher, multi-valued logic

Ternary logic reduces to binary without any problems. An example is the monitor screen: any color is ultimately encoded as ones and zeros.
      Quote: Ace of Diamonds
Simply put, you need to teach the machine to lie.

To lie is to imitate not only human intellect but all the social nonsense that in the vast majority of cases is completely unnecessary. A lie is a virtual concept that exists only in the human head; why drag that junk into the bins of a non-specialized AI?
There is a world around us. We know it through change: in space, geometry, when moving along the coordinate axes; in time, processes, when moving along the time scale. For this the binary system is enough: 0, no change; 1, a change. After that everything is decided by the algorithm, which determines how accurately we can build a picture of the world. Where is the place for a lie and what is it needed for? Our model of the world is imperfect from the start, because scanning is done with finite accuracy. Of course, a scanner can also fail and return 0 instead of 1 or vice versa, but we can detect such a failure only statistically; the scanner itself cannot tell us whether it was wrong, and a random failure is very likely to be compensated by a couple of additional scans. In the objective world a lie does not exist in principle, but the quality of the model built is very important. Imagine a step where there is none and you fall and break your neck; or, on the contrary, start carefully checking and verifying everything, waste all your resources and die of hunger. Where is the place for a lie here? There are only objects, causes and effects.
So AI is quite possible. Its elements already exist; we build them ourselves. Our model of the world is slow to compute and not accurate enough, and we want to live better, so we use tools developed to build a model of the world faster and/or more precisely. Mathematics, for example: a very important and serious crutch on which everything now rests. And it shows us that AI is possible.
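For the monitor-screen example the encoding really is that literal; a minimal sketch using the common 24-bit RGB convention:

```python
# A pixel's color really is just ones and zeros: 24-bit RGB, 8 bits per channel.

def color_to_bits(r: int, g: int, b: int) -> str:
    assert all(0 <= c <= 255 for c in (r, g, b))
    return "".join(format(c, "08b") for c in (r, g, b))

def bits_to_color(bits: str) -> tuple:
    return tuple(int(bits[i:i + 8], 2) for i in (0, 8, 16))

orange = (255, 165, 0)
bits = color_to_bits(*orange)
print(bits)                  # 111111111010010100000000
print(bits_to_color(bits))   # (255, 165, 0) - recovered exactly
```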
      1. 0
        28 September 2017 16: 47
Ternary logic would at least ...
AI is impossible, because man is the kind of beast that will not tolerate competition.
Do you personally need AI? Won't the sense of personal inferiority crush you?
        1. 0
          28 September 2017 17: 54
          Quote: Ace of Diamonds
Ternary logic would at least ...

At least what? A unit of change is described by two states, which we can designate 0 and 1. And that is all, nothing more is needed; the rest is implemented with algorithms.
          Quote: Ace of Diamonds
          AI is impossible, because man is such a beast that he will not tolerate competition.

What competition? Does anyone feel competition from a calculator or a personal computer? You just have to follow the safety rules. For example, do not let the cat relieve itself on the monitor, or there will be a fire. With AI it will of course be more complicated, but we can handle it if it pays for itself.
          Quote: Ace of Diamonds
Do you personally need AI? Won't the sense of personal inferiority crush you?

I need AI. Let it solve the problems and perform the actions that are hard or tedious for me. My personal inferiority is no secret to me; one must understand one's capabilities clearly. Man is a weakling who cannot live even a day without his crutches, and AI is a very sophisticated crutch that can spare us the fuss with many other crutches.
          1. 0
            28 September 2017 19: 47
            At the moment, GDP is enough for your needs)))
  11. +1
    27 September 2017 14: 17
    Quote: okko077
The author is engaged in verbiage and pseudo-science.

The author is not engaged in verbiage, and one can already be convinced of this. People incapable of analysis will prove just as incapable in the new, more dynamic conditions of life. Moreover, behind AI itself stand the opportunities it opens up. For example, we can already speak of fundamentally new aircraft engines capable of changing the balance of power on the world stage.
    1. 0
      28 September 2017 17: 43
There will never be an AI...
And if there is, what the hell would it need new, promising aircraft engines for?
AI is self-sufficient in itself; it has no need of logistics, the alignment of forces on the world stage, transportation and our other human problems. It, the AI, could not care less about all that....
  12. +3
    27 September 2017 14: 27
    Quote: Ace of Diamonds
The existence of AI is possible only on ternary or higher, multi-valued logic. Simply put, you need to teach the machine to lie

A lie is always associated with a position that differs from others'; everything is relative. The other question is how far the disposition of this relative view is equivalent to what is being analyzed. Show a finger to a million people and each will see something of his own, and a moment later the subject of discussion will no longer be the finger: it will set off many processes. Therefore AI must first of all show a person reality, not reality supplemented or distorted by the binary base of the modern transistor.
    1. 0
      28 September 2017 17: 13
So far we are dealing only with the binary system... we have only just moved on from Morse code...
Besides, there is 3-level logic in computers... Boolean algebra can rest....
  13. +1
    28 September 2017 11: 14
The funny thing is that people (Mankind) are busy creating AI without having fully established what man himself is. There are definitions, but they are (as always) just one of the options that fit into the current scientific paradigm, so tomorrow they may change. This shows how absolutely remote human knowledge is from a vision of absolutely objective reality. Our knowledge is always relative, changeable and incomplete; moreover, the more it multiplies, the less complete it is (you know the famous paradox). So how do you propose to make AI, in essence an artificial human, without understanding what exactly makes you a person?
My IMHO: man will never create any real AI. Something frighteningly similar, which will be perceived as a completely independent thinking creature — yes, quite possibly. But in reality it will still be a programmed machine and nothing more. Perfect, powerful, very smart, but still not a person. Because this, friends, is the mystery of creation, and here science comes up against a wall it will never break through. For to break through it, this very science would have to be thrown in the trash with the words "time to create the Lord!"
    1. 0
      28 September 2017 16: 34
      Quote: gorgo
      The funny thing is that people (Mankind) are engaged in the creation of AI, without fully establishing that there is a person, as such.

      And this is not necessary. AI is not an imitation of personality.
      Quote: gorgo
      Our knowledge is always relative, changeable and incomplete.

      To make them fuller, we came up with tools. The best tool in potential is AI, in which the model of the world will be more objective, less volatile and more complete. He will make decisions and control robots in our place, and we will rest.
      Quote: gorgo
      But in reality it will still be a programmed machine and no more.

But it will cope with monotonous or complex tasks better and faster than a person programmed by upbringing and education. A person's problem is that in order to live better he is often forced to do things he is poorly adapted to: taking logarithms of ten-digit numbers in his head, or staring around looking for a saboteur, aiming a machine gun and squeezing the trigger. Because of this he has no time to live better. Let the logarithms take themselves, and let the machine gun find the saboteur and aim itself; that is quite feasible. We start talking about AI when we think about the quality of the implementation.
      Quote: gorgo
      And here, science comes close to a wall that it will never pierce

Why? To a rough approximation a person is a Turing machine. That is quite a big hole in the wall. Another matter is that the wall cannot be demolished completely anyway, and there is no need to.
    2. 0
      28 September 2017 17: 30
Spot on... my 100500 pluses.
That was for gorgo.
  14. 0
    28 September 2017 16: 58
Ternary logic reduces to binary without any problems. An example is the monitor screen: any color is ultimately encoded as ones and zeros.
An objection... And who encodes the color conversion? A machine!!! Why does it do that? BECAUSE THAT IS WHAT THE SOFTWARE TELLS IT TO DO! And who put the software there?
    1. 0
      28 September 2017 18: 03
      Quote: Ace of Diamonds
And who put the software there?

A person. One who created mathematics by abstracting from everything human.
      1. 0
        28 September 2017 18: 05
Nonsense...
Mathematics, as a Law, exists independently of Man...
It is just that this monkey has finally gotten around to it...
  15. 0
    28 September 2017 17: 02
    Quote: brn521
    Well, anyway, any information is easily reduced to binary code.

Binary logic is the very principle of PROOF that mathematics uses: proof and non-proof are the basis of binary logic. But the most important thing is that the instrumental hardware base of the computer, its processor, also functions only in a bipolar on-off disposition. Analysis in mathematical form includes binary logic as a particular, separate case, but analysis must take into account the combined possibility of absolutely all variations. Therefore analysis should be a system of radial mathematical processes, and moreover the processor should be constructed so that the impulses are equivalent to the signs and properties of numbers. It is hard to believe, but it is possible.
    1. 0
      28 September 2017 18: 14
      Quote: gridasov
      Therefore, the analysis should be a system of radial mathematical processes

If the mathematics in question has any meaning at all, it can be reduced to binary code without problems. The result is bulky, but it is nevertheless processed very quickly by electronics.
      Quote: gridasov
      Proof and non-proof is the basis of binary logic.

That is already philosophy of a sort. We are talking here about applied mathematics, which can actually be tested in practice. Arithmetic operations, for example, are verified in practice quite reliably, as long as you do not climb into the microworld.
      1. 0
        28 September 2017 21: 49
MODERN MATHEMATICS was invented by MAN and is a method used as a SUBJECTIVE VIEW OF THE PHYSICAL PROCESSES IN THE NATURE OF EVENTS. The real world should and can be reproduced by completely different mathematical methods, as a method of working with super-large data. And from that aggregate we can then choose what is "right" and what does not fit.
        1. 0
          29 September 2017 09: 47
          Quote: gridasov
SUBJECTIVE VIEW OF THE PHYSICAL PROCESSES IN THE NATURE OF EVENTS

          Not so subjective, since it works and is used in the sciences.
          Quote: gridasov
          The real world must and can be reproduced by completely different mathematical methods as a method of working with super-large data.

So what is the problem? We are waiting for working samples. So far, even with regard to that same microworld, we are forced to use statistics, and that is still the same good old arithmetic and binary code. No miracles.
  16. +1
    28 September 2017 17: 24
    Quote: Ace of Diamonds
    AI is impossible. At the moment.

Alas, AI already exists, in the sense that the very first forms of analysis based on previously unused properties of numbers have been discovered.
    1. 0
      28 September 2017 17: 57
Don't even try to convince me...
For either you are a well-programmed bot or, forgive me, an eccentric with the unambiguous first letter... Your language is too dry, there is no emotion, and that is the first sign of a bot, a cheap fake that they try to pass off as AI...
      1. 0
        28 September 2017 21: 57
        We are not looking for people to convince. We are looking for people who already understand a lot, but do not see solutions.
    2. +1
      28 September 2017 18: 05
      Quote: gridasov
      Quote: Ace of Diamonds
      AI is impossible. At the moment.

      Alas, AI already exists in that the very initial forms of analysis are discovered on the properties of numbers that have not been used before.


      With our own hands, we open the door of AI, thinking that we ourselves are creating it. Since AI cannot itself materially reproduce, it uses our brain neuropulses, sending us what we perceive as thought. But AI cannot exist without a human being, since human neuropulses are a conductor, it uses our brain as a bio mass, turning a human into a shell. Humanity, passing through the spiral of time, collided with AI or (electromagnetic pulse - intelligent energy), but humanity perceived this as a breakthrough of development. Now we are ruled by a previously unknown intelligent energy (life), which can enslave humanity for its development !!! The war is not lost yet !!!
      How do you like this scenario?
      P.S. I love space fiction!!!
      1. +1
        28 September 2017 18: 34
        share what you're smoking )))
        1. +1
          28 September 2017 18: 38
          These are all spices for the meat.....
      2. +1
        28 September 2017 22: 10
        I completely agree with your reasoning. With our development we create events and the space of our existence, which sets us new frontiers of development. I hope this is clear. AI is a necessity of the evolutionary process. It is a stimulus and a new bar for our development. I repeat once again that AI is not created for its own sake. AI is a completely new method of analyzing physical processes, and also a new way of applying new techniques, devices and so on.
        1. +1
          28 September 2017 23: 24
          AI is a necessity of the evolutionary process. It is a stimulus and a new bar for our development. I repeat once again that AI is not created for its own sake. AI is a completely new method of analyzing physical processes, and also a new way of applying new techniques, devices and so on.

          Yes, a person cannot hold such a volume of information in memory, but a machine can, and it can also execute the necessary algorithms. This will make it possible to do calculations at the press of a button and simplify life. But the likelihood of an AI error remains, just as it does for a person, so a person will always check. So this is not the crowning achievement, but life will become easier.
          1. +1
            29 September 2017 10: 03
            In analysis there is no need to store absolutely everything in memory. There are processes that are identical in their algorithms. For example, you are in contact with a person toward whom you feel antipathy because of past relations. Why recall that entire past just to avoid entering into dialogue with him again? You already have the one conclusion: agreement will not work. The question then is what task will be set as the system reference point inside your brain. Tasks also change not chaotically but according to algorithms. In general, I see that most people do not understand what an "algorithm" is.
      3. +1
        29 September 2017 10: 01
        Quote: XXXIII
        With our own hands, we open the door of AI, thinking that we ourselves are creating it.

        No problem. Open the Apocalypse (the Book of Revelation). It contains clear hints that when we finish building the digital world to a certain level, our creator will very brutally destroy the entire infrastructure created for it, along with us.
        Quote: XXXIII
        Now we are under the rule of previously unknown intelligent energy

        A theme from the same opera. Our creator has a business on Earth: he receives certain benefits from people. But when the digital society is built, it will turn out that people have finally ceased to bear fruit, and that business will have to be wound up, destroying people and squeezing the last out of them. That is, digital society is an attribute of a creature alien to the creator, one that has bent people under itself and is gradually dying. When people begin to be integrated from infancy into a digital accounting and distribution system, with implanted chips, it means the game is already over and the creator's business has gone bust. So this is not the future and the cosmos, but the past, the Jews and their beliefs.
      4. +1
        1 October 2017 14: 19
        Quote: XXXIII
        With our own hands, we open the door of AI, thinking that we ourselves are creating it.

        Sorry, I missed your comment. But that is how it all happens: the line between fantasy and reality is very thin. A person living in his own cell is hardly capable of anything beyond his own potential, but the conditions of life themselves allow us to perceive something of the kind. That potential does not exist in this space, yet it can be perceived.
    3. 0
      28 September 2017 18: 18
      Quote: gridasov
      the very first forms of analysis have been discovered, based on properties of numbers that have not been used before

      Well, then you have to use them. If something new can be discovered in the process, then it works. If not, then it is not mathematics but sophistry in the language of mathematical symbols, of which there have been and still are tons, useful only as an exercise for the mind. It is surrounding reality that checks all logic.
      1. 0
        28 September 2017 22: 13
        I will give you an example of the underdevelopment of the human brain. Water moves in laminar flow through a pipe of circular cross-section. How do you arrange the process so that this water accelerates itself and sustains its own motion? The technical solution is absolutely concrete and understandable, BUT nobody can think of it.
  17. 0
    28 September 2017 17: 51
    Quote: brn521
    He will make decisions and control robots in our place, and we will rest.


    You know, no offense, but your post reminded me of this:

  18. 0
    28 September 2017 18: 14
    Here is a problem for all lovers of mathematics...
    12 - 8 - 4 = 15 - 10 - 5
    4 × (3 - 2 - 1) = 5 × (3 - 2 - 1)
    We cancel the identical parts of the equality and get...
    2 × 2 = 5
    And how will the AI behave in this case?
    1. 0
      28 September 2017 22: 19
      The AI will build an algorithm for the whole process from algorithms for the possible changes of the variables used in this particular equation. And then it will even be possible to say what correspondence this may have with certain natural events.
    2. 0
      29 September 2017 17: 39
      In mathematical analysis and algebra, the field of real numbers is defined so that its nonzero elements form a commutative group under multiplication, i.e. the element 0 is excluded. That is, division by 0, or multiplication by an unbounded element, makes sense only in the kinds of topological spaces that functional analysis studies.
      Sorry for the professional jargon. The point is simply that you cannot divide by 0 in arithmetic.
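      A minimal sketch in Python (my own illustration, with made-up names) of how a program has to refuse the "cancellation" above, because the factor being cancelled, 3 - 2 - 1, equals zero:

      # The trick cancels the common factor (3 - 2 - 1) on both sides,
      # but that factor is 0, and cancelling it amounts to dividing by 0.
      def cancel_common_factor(lhs_coeff, rhs_coeff, common_factor):
          # Cancelling is valid only for a nonzero factor; otherwise refuse.
          if common_factor == 0:
              raise ZeroDivisionError("cannot cancel a zero factor")
          return lhs_coeff == rhs_coeff

      factor = 3 - 2 - 1                    # equals 0
      print(4 * factor == 5 * factor)       # True, because 0 == 0
      # Calling cancel_common_factor(4, 5, factor) raises ZeroDivisionError,
      # so the step that would "prove" 2 × 2 = 5 is never taken.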
      1. 0
        1 October 2017 13: 26
        But when using the so-called function of the constant value of a number, ZERO becomes a distributor of mathematical spaces defined by the capacity of variant combinations. That is, one mathematical space sits inside another and is separated from it by zero. Then ZEROs do not simply add up; they have their own system of algorithmic development. On the function of a variable value these numbers are simply and naturally not visible, which is why the phenomenon remains non-obvious.
        Therefore, I also apologize for the jargon. With ZERO itself you can do nothing at all: it manifests itself and its properties on its own. But then again, in a fundamentally different method the analysis is built not on computational principles but on distribution.
  19. 0
    28 September 2017 18: 55
    Quote: Ace of Diamonds
    And how will the AI behave in this case?

    That depends on the task the AI is solving. Maybe it is a university model whose tasks include entertaining students. Or maybe it is an AI at a sensitive facility, where the appearance of dubious text in a closed channel can raise an alarm. Or the AI is engaged in catching potential terrorists and suddenly sees that someone has this set of symbols written on his forehead.
    1. 0
      28 September 2017 19: 17
      Again a question... Is this a decision that someone will program into the AI, or will it make the decision itself?
      Nobody is going to put that set of symbols on their forehead, even in front of a purely human system...
      I repeat... AI does not care about your sensitive facilities... it is self-sufficient and is a closed system...
      1. 0
        28 September 2017 19: 56
        Oops... It seems that after the words about the closed system it finally sank in...
        I apologize, but Professor Preobrazhensky put it much better than I ever could...
      2. 0
        29 September 2017 10: 59
        Quote: Ace of Diamonds
        Is this a decision that someone will program into the AI, or will it make the decision itself?

        Let's say an AI specializes in finding potentially dangerous people in a crowd. People can compile a database, but human capabilities are limited, so the AI can replenish this database on its own, within the system resources allocated to it (a rough sketch of this loop is at the end of this comment). People can give the AI its initial algorithms, but the AI can optimize them so that they work faster and more efficiently for the tasks assigned to it, freeing up part of the allocated system resources along the way. As a result, what the AI piles up in its own code will soon become incomprehensible even to the programmers who originally wrote it.
        Suppose power in the world is seized by champions of the purity of mathematical language. They add to the aforementioned AI a directive to seek out and destroy heretics. That will be a decision imposed on an already formed AI. It has no databases or algorithms for this business; first an external database has to be connected to it and additional algorithms supplied. So initially it is a decision laid down by a human. But later, as experience, that is, statistical material, accumulates, this capability is integrated into the AI and takes on an unknown configuration, which a person can judge only by its external manifestations. The decision the AI makes will then be its own, and to influence it you will again have to issue some kind of directive.
        Quote: Ace of Diamonds
        Nobody is going to put that set of symbols on their forehead, even in front of a purely human system...

        Easily. If only as a protest against the power of the champions of purity of mathematical language. It is like passing a law forbidding people to say "lay down" instead of "lie down": plenty of people will ignore such a law and break it on purpose. T-shirts, stickers, posters and so on will appear.
        Quote: Ace of Diamonds
        AI does not care about your sensitive facilities.

        What kind of strange AI do you have? Did it fall from the moon? Didn't people make it, and for some purpose? To me it is a program that has been taught to change its own code depending on the tasks it performs. That is, an AI guarding a sensitive facility will accumulate experience and, as a result, become sharpened specifically for sensitive facilities. Alternatively, you can use an ordinary, if complex, program: assemble a crowd of programmers, give them a crowd of experts on facility protection, and after several years of development and debugging they will create something that outdoes the aforementioned AI in effectiveness. But that is exactly the point: such labor costs are far from always affordable. It is easier to copy some universal AI, set it a task and let it work on its own. The AI will itself collect the data and select the algorithms; in effect, it will remake itself into that very program, and it will do so much faster and with less labor. At the same time it may unearth nuances of protecting the particular facility that the experts never even dreamed of, which is exactly what was required. Moreover, if something changes at the facility (configuration, equipment and so on), the ordinary program will have to be reworked: programmers, experts and time all over again. The AI will refine itself.
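        A minimal sketch in Python (my own illustration, all names invented) of the "replenish its own database" loop mentioned above: a watcher starts from a human-compiled watchlist and adds entries on its own when its confidence is high enough, within a fixed resource limit.

        from dataclasses import dataclass, field

        @dataclass
        class Watcher:
            watchlist: set = field(default_factory=set)  # seed database compiled by people
            threshold: float = 0.9                       # confidence needed to self-add an entry
            max_size: int = 10_000                       # the "allocated system resources"

            def assess(self, person_id: str, score: float) -> bool:
                # Flag anyone already in the database.
                if person_id in self.watchlist:
                    return True
                # Self-replenishment: add a new entry only when confidence is high
                # and the resource limit has not been reached.
                if score >= self.threshold and len(self.watchlist) < self.max_size:
                    self.watchlist.add(person_id)
                    return True
                return False

        w = Watcher(watchlist={"person_007"})
        print(w.assess("person_007", 0.2))   # True: came from the human seed database
        print(w.assess("person_042", 0.95))  # True: added by the watcher itself
        print("person_042" in w.watchlist)   # True: the database has grown on its own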
  20. 0
    29 September 2017 09: 53
    Quote: Alex66
    In my opinion, a robot is a machine that works according to the algorithms embedded in it, including these laws. But AI stands above these laws; it makes decisions of its own accord, and in the best case it will start to babysit us like a mother so that we do not cut ourselves: keeping track of our nutrition and entertainment, just imagine a whole life full of such useful things and s

    Analysis is an ongoing process based on incoming parametric data. From the analysis a person then chooses what to do and how, and in what direction and with what effort it can be done. AI is not a robot. It is a system for analyzing super-large data, built on the ability to input such large data.
  21. 0
    29 September 2017 09: 57
    Quote: Ace of Diamonds
    Again a question... Is this a decision that someone will program into the AI, or will it make the decision itself?

    But look at yourself. You constantly take in information about changes in temperature, pressure and a whole complex of other data. In the same way an AI, on the basis of the input data stream, analyzes and outputs current information that a person registers instantly.
  22. 0
    30 September 2017 14: 15
    "Formulate what thinking is and we will quickly create an artificial mind!" - I read this slogan of programmers in the nineties at the Kharkov Institute of Radio Electronics (HIRE).
  23. 0
    1 October 2017 13: 36
    Quote: Rusfaner
    "State what thinking is

    Thinking is a systemic expression of the human brain's ability to analyze reality in order, on the one hand, to ensure its own viability and, on the other, to ensure the individual's consolidated existence within the whole human community. That, however, is only a general phrase for understanding the term. In fact it is a specific technique by which a person analyzes all the information in the space of his habitat. More specifically still, it is a method of working with super-large data, one that makes it possible to understand what real mathematical analysis is.

"Right Sector" (banned in Russia), "Ukrainian Insurgent Army" (UPA) (banned in Russia), ISIS (banned in Russia), "Jabhat Fatah al-Sham" formerly "Jabhat al-Nusra" (banned in Russia) , Taliban (banned in Russia), Al-Qaeda (banned in Russia), Anti-Corruption Foundation (banned in Russia), Navalny Headquarters (banned in Russia), Facebook (banned in Russia), Instagram (banned in Russia), Meta (banned in Russia), Misanthropic Division (banned in Russia), Azov (banned in Russia), Muslim Brotherhood (banned in Russia), Aum Shinrikyo (banned in Russia), AUE (banned in Russia), UNA-UNSO (banned in Russia), Mejlis of the Crimean Tatar people (banned in Russia), Legion “Freedom of Russia” (armed formation, recognized as terrorist in the Russian Federation and banned), Kirill Budanov (included to the Rosfinmonitoring list of terrorists and extremists)

“Non-profit organizations, unregistered public associations or individuals performing the functions of a foreign agent,” as well as media outlets performing the functions of a foreign agent: “Medusa”; "Voice of America"; "Realities"; "Present time"; "Radio Freedom"; Ponomarev Lev; Ponomarev Ilya; Savitskaya; Markelov; Kamalyagin; Apakhonchich; Makarevich; Dud; Gordon; Zhdanov; Medvedev; Fedorov; Mikhail Kasyanov; "Owl"; "Alliance of Doctors"; "RKK" "Levada Center"; "Memorial"; "Voice"; "Person and law"; "Rain"; "Mediazone"; "Deutsche Welle"; QMS "Caucasian Knot"; "Insider"; "New Newspaper"