US virtual dogfight: AI fighter defeats real pilot 5-0


The US Defense Advanced Research Projects Agency (DARPA) has announced progress on a program to integrate artificial intelligence into the work of combat pilots. This is the ACE (Air Combat Evolution) program, under which virtual air battles against artificial intelligence are already being conducted using demonstrators based on fighters in service with the US Air Force. Such training is said to reveal "the degree of the pilot's confidence in artificial intelligence."

Simulation battles were organized at the Applied Physics Laboratory of Johns Hopkins University.



Ultimately, the plan is to transfer this virtual aerial combat capability, which uses artificial intelligence (AI), to the real environment.

The essence of the experiment so far: several teams fly simulated air combat in computer-modeled F-16 fighters (and other types) controlled by artificial intelligence. The battles were fought against experienced US Air Force pilots.

As noted, the AI system demonstrated "complete superiority over a real pilot": in five virtual air battles, the AI won all five.

From the report:

The fighter (computer demonstrator), controlled by the AI, defeated the real, experienced pilot with a score of 5:0.

ACE program manager Colonel Dan Javorsek says DARPA is now preparing to move the simulation toward real combat:

This will be a critical test for artificial intelligence systems, which must prove themselves not in a computer, but in the real world. So far, one has to rely solely on the digital "picture."

It is noted that over the course of the virtual battles, the AI's advantage over the human showed up in several areas. For example, the artificial intelligence decided much faster to attack the simulated enemy aircraft with a particular weapon.

Experts note that this advantage can turn into one of AI's main drawbacks, since it lacks critical thinking. The AI cannot answer whether it is actually advisable at a given moment to attack an aircraft it has marked as hostile, so an attack can lead to serious consequences.

DARPA TV:

53 comments
  1. +9
    23 March 2021 06: 52
    Objectively, in tight turns a human will lose to the machine: the g-loads that arise are simply beyond what the human body can take.
    The only question is how far the AI can be allowed to open fire on a target based on the machine's own conclusions.
    If a glitch occurs in the program for some unknown reason, it could open fire on its own creators.
    Such a machine would need pre-planted explosives for self-destruction in case the AI slips out of human control.
    1. +4
      23 March 2021 06: 55
      Quote: Lech from Android.
      Objectively, in tight turns a human will lose to the machine: the g-loads that arise are simply beyond what the human body can take.

      We do not know whether the overload limits were equalized.
      (Otherwise it would be impossible to compare the quality of control, especially since there were no real g-loads in the simulator.)
      "For example, the artificial intelligence decided much faster to attack the simulated enemy aircraft with a particular weapon."
      Then the AI won simply through reaction speed and precision of control.
      The real F-16, after all, carries g-limits set by what a human can bear.
      1. +3
        23 March 2021 07: 02
        For example, artificial intelligence made a decision much faster to attack an airplane of a simulated enemy with the use of certain weapons.

        Yeah, and it didn't even doubt that it was right. How convenient for starting the Third World War.
        "It wasn't us, it was all the AI."
        1. +8
          23 March 2021 07: 32
          Has the comment "these are all computer games for kids" appeared yet? From the series "the bullet is a fool, the bayonet is a fine fellow," "all your steam engines are useless against a good sail," and "UAVs are aeromodelling-club toys."
        2. +4
          23 March 2021 07: 33
          The computers at DARPA run Ubuntu Linux, my respect.
          1. +2
            23 March 2021 11: 31
            To decide whether an attack is expedient, you need complete information about the whole situation. That information may be unavailable, for example if the satellite constellation is knocked out; the AI is left without data, and that case, too, has to be simulated in the air-combat model. In general, with no information about the enemy it is hard to build a combat model for the AI at all. In the model, as I understand it, things are simple: I see the target, I attack, and the enemy already has the specified performance characteristics (say, a specific F-16). In a real situation nothing is that simple.
            I think AI really can make the pilot's job easier, but autonomous use of AI in combat is still problematic. It will find it hard to change the mission based on the real situation; it seems too straightforward and less flexible. It will carry out the task as given: for example, on the run-in to a designated, reconnoitered ground target it suddenly turns out the target is a decoy. The AI is unlikely to conduct additional reconnaissance to find the real target; it will attack the decoy at the given coordinates. But that is a digression, since we are talking about air combat.
            Back to the question of how they intend to apply it. If autonomously, then, as noted above, with a lack of information (satellite constellation disabled, powerful jamming limiting data exchange) the AI will turn out to be not so capable or expedient, and its effectiveness will be closer to that of a Japanese kamikaze. There is no doubt AI will be a great help to the pilot, but only if a golden mean is found for how far it may intervene in the pilot's work, without giving it powers that constrain the pilot's decision-making.
            By definition, AI is capable of self-learning. For now it is only being taught, so this is not really AI but rather a program that executes air combat according to given algorithms.
            PS: Good or bad, it all still depends on the person. Handing all decisions over to the AI is, for now, frightening. Assistance to the pilot is of course good in any scenario, although aviation accidents (the Boeing crashes caused, in effect, by software "help"), pilots' loss of manual flying skills, and the problems such system failures create for crews all say that not everything that glitters is gold.
            Well, that's just my personal opinion.
      2. +7
        23 March 2021 07: 07
        Then the AI won simply through reaction speed and precision of control.

        Imagine a situation where the AI aircraft continuously pulls Nesterov loops, getting onto the enemy's tail. It feels nothing, while the pilot experiences crushing pressure on every internal organ until he blacks out. He won't last long.

        At around 5 g, an ordinary person loses consciousness...
        1. -9
          23 March 2021 08: 00
          Imagine a situation where the AI aircraft continuously pulls Nesterov loops, getting onto the enemy's tail. It feels nothing, while the pilot experiences crushing pressure on every internal organ until he blacks out. He won't last long.

          No flight simulator has yet been created that cannot be distinguished from reality.
          A pilot can anticipate airflow separation from the wing, or fly on the very edge of a stall. AI cannot do that.
          A pilot facing billions of options can intuitively pick the right one, while the AI needs time to evaluate them all.
          1. +3
            23 March 2021 08: 56
            If you build the appropriate model and put it into the program, the AI can learn this too.
          2. +3
            23 March 2021 09: 06
            Quote: lucul
            No flight simulator has yet been created that cannot be distinguished from reality.

            The key word here is "yet."
      3. +2
        23 March 2021 07: 38
        There are no "Red Barons" now, no. On the other hand, it would be interesting to see a duel between the machine and a "Red Baron."
    2. +11
      23 March 2021 07: 01
      Quote: Lech from Android.
      exorbitant overloads

      The AI is currently being tested on conventional aircraft, which were designed around the pilot's capabilities. Later, new machines with otherwise prohibitive maneuverability will be designed for it.
      1. +9
        23 March 2021 07: 03
        Quote: Waltasar
        Later, new machines with otherwise prohibitive maneuverability will be designed for it.
        Technological progress cannot be stopped...
        So it will be.
        But there will be no more people in the cockpits either. On either side.
      2. +2
        23 March 2021 15: 25
        Later, new machines with otherwise prohibitive maneuverability will be designed for it.

        ... And get an anti-aircraft missile?
        1. +1
          24 March 2021 11: 58
          ... And get an anti-aircraft missile?

          Amen.
    3. -7
      23 March 2021 07: 04
      You've watched too much Terminator. :) And what if, in a failure scenario (or worse), the machine figures it out and disables its own self-destruct? For now, man is the most advanced machine; it will take decades to create anything comparable in AI. Man could do it, but that capability was given to him by evolution and... by someone more intelligent.
      1. +7
        23 March 2021 07: 09
        Quote: TerraSandera
        And what to do if in this case there is a failure
        I suggest reading up (Wikipedia will do) on what "friendly fire" is!
        And, while you're at it, "crash"!
        The machine might well have no more such "failures" than people do...
        1. 0
          23 March 2021 09: 35
          I don't "read" Pediwikia in any form. Whether there are fewer failures or not depends on the authors of the algorithms, that is, on the same people with the same human factor. You can't get away from that.
    4. +8
      23 March 2021 09: 05
      A human has no chance of beating a computer-controlled UAV (AI isn't even necessary there; plain algorithms solve it without trouble). We just need to accept this and urgently develop unmanned fighters. Massive and cheap.

      The first training dogfight between a UAV and a fighter took place in 1971, between an F-4 and a BGM-34F.
      On May 10, as a graduation exercise, two F-4 Phantoms, one flown by Smith and both loaded with air-to-air missiles, flew out to intercept MASTACS. The drone was remotely controlled by air combat instructor John Pitzen.

      "Tally-ho, left wing," Smith announced over the radio, meaning he was about to engage, but the drone broke hard before he could get missile lock.

      “He turns like a mother,” Smith said.

      Smith lost sight of the MASTACS drone, which kept turning hard until it looped onto Smith's tail. The drone's ability to hold 6 g turns for extended periods, which would make a human black out, meant it could turn 180 degrees in 12 seconds at high speed. The unarmed drone settled into a firing position behind Smith.

      The same sequence repeated over and over as the two Phantoms tried in vain to target the MASTACS drone. They simply couldn't stay in a firing position long enough for their weapons to lock properly. They fired two radar-guided AIM-7 Sparrows and two heat-seeking AIM-9 Sidewinders without a single hit. Had it been armed, the MASTACS Firebee would have easily dealt with both opponents.


      https://www.forbes.com/sites/davidhambling/2020/06/11/how-drones-beat-top-guns/
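The figures quoted above (a sustained 6 g turn, 180 degrees in 12 seconds) can be sanity-checked with the standard level-turn relation, turn rate = g·sqrt(n² − 1)/v. This is an editorial back-of-the-envelope sketch, not from the article:

```python
import math

G = 9.81  # m/s^2

def turn_rate_deg_s(load_factor: float, speed_ms: float) -> float:
    """Sustained level-turn rate omega = g*sqrt(n^2 - 1)/v, returned in deg/s."""
    return math.degrees(G * math.sqrt(load_factor ** 2 - 1) / speed_ms)

# 180 degrees in 12 s is 15 deg/s. Solve for the speed that gives 15 deg/s at 6 g:
target_rate = 180 / 12  # deg/s
v = G * math.sqrt(6 ** 2 - 1) / math.radians(target_rate)
print(f"speed for 15 deg/s at 6 g: {v:.0f} m/s (~{v * 1.944:.0f} kt)")
print(f"turn rate at 6 g and 220 m/s: {turn_rate_deg_s(6, 220):.1f} deg/s")
```

The answer comes out near 220 m/s (roughly 430 knots), i.e. the quoted 6 g / 12-second half-turn is internally consistent with "high speed" for a Firebee-class target.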
      1. 0
        23 March 2021 10: 06
        Quote: OgnennyiKotik
        A human has no chance of winning against a computer-controlled UAV

        A debatable point. The UAV's indisputable advantage is in withstanding g-loads. But an anti-aircraft missile will still withstand far more g than any airframe, so the need for extreme maneuverability is questionable. And a living pilot's thinking is clearly more flexible than a set of "if-then" instructions.
        Quote: OgnennyiKotik
        We just need to accept this and urgently develop unmanned fighters. Massive and cheap.

        No such thing exists in nature. Ukraine recently bought six Bayraktars and two control stations for $70 million. So much for cheap, mass-produced UAVs. And the onboard equipment is just an infrared camera and a laser designator, plus a cheap lawn-mower-grade engine.
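The "set of if-then instructions" mentioned above can be pictured as a hard-coded engagement policy. A toy sketch follows; every name, threshold, and rule here is invented purely for illustration, and the point is the brittleness: any case the author did not anticipate falls through to a default.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    identified_hostile: bool  # result of a (hypothetical) IFF check
    range_km: float
    closing: bool

def engage_decision(c: Contact) -> str:
    """A brittle rule-based policy: every situation must be anticipated in advance."""
    if not c.identified_hostile:
        return "hold"            # no rule covers a spoofed or jammed IFF reply
    if c.range_km < 40 and c.closing:
        return "fire"
    if c.range_km < 80:
        return "close"
    return "track"

print(engage_decision(Contact(identified_hostile=True, range_km=35, closing=True)))
```

A pilot improvises outside the rule table; this policy cannot, which is exactly the flexibility gap the comment is pointing at.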
        1. +2
          23 March 2021 12: 25
          Losing a pilot along with the aircraft is in any case costly, unreasonably heavy in its consequences, and pointless at the modern level of electronics. In the future the choice will be between radio-channel control (a protected wideband channel over long distance is itself nontrivial), AI, or a combination of the two. You can start from either end and still arrive at something workable.
    5. 0
      23 March 2021 09: 21
      No explosives needed. Fuel is not infinite, and neither is ammunition.
    6. 0
      23 March 2021 17: 20
      The only question is how far the AI can be allowed to open fire on a target based on the machine's own conclusions.


      Unlike a manned enemy aircraft, the AI does not need to keep a background thought running in its computer: "will this land me before a military tribunal?" In a critical situation it simply selects the most suitable algorithm for completing the combat mission, and nothing more.
    7. -1
      24 March 2021 11: 56
      And were the humans sitting at a simulator? Then the fight is unequal in principle, because everything differs from reality...

      When I was learning to fly a helicopter, and later a light airplane, I tried playing a flight simulator... and quickly gave it up, because what comes instinctively in real life is awkward, slow, and uncertain on a computer. This applies especially to camera control.
      A helmet would help a little, of course, but the vestibular system is still not engaged, and rendering the scene in pixels does not make for good vision: reality is not a digital display but an analog one.

      Fighter pilots do train on dedicated simulators, of course, but that does not actually eliminate a number of these shortcomings!

      I have some experience (about 6-8 hours) on a Boeing 737NG simulator with a motion platform, but there the work is mostly inside the cockpit with the controls; the screens simulating the "outside" are somewhat secondary, and everything on them happens rather slowly. And besides: you are being shaken, but you are not flying anywhere, and you can feel it!

      Meanwhile, the computer and its aircraft get a huge head start! What I mean is that it computes the conditions itself and generates the "3D world" in which the battle takes place. The imperfections of the sensors responsible for pattern recognition, the radars, and the other detection systems are all smoothed away.
      That is, the computer calculates the distance to a point, applies an algorithm with its constraints and conditions... and that's it! Target detected. The missile also flies perfectly, again according to an algorithm.

      But in real life it will be different! A living pilot flies not only by the screen but also by instinct. He will be more composed and more careful. And he will be able to trick the AI, for example by releasing a decoy drone (something like that will exist by then), forcing the AI to attack and give itself away.

      Plus, the systems on the drone itself will be inherently more temperamental: a sensor ices up, the radar returns a fuzzy, unidentifiable picture, or electronic warfare suddenly switches on in some incomprehensible way and the silicon starts playing tricks... And it will not have a "live" eye capable of realistically assessing the situation.

      In other words, these virtual flights take place inside the environment of the very computer that generates the opponents. Algorithms are being refined, naturally. But "it was smooth on paper..."

      And besides, it is all very expensive, and very profitable! What beautiful reports you can write...
  2. The comment was deleted.
  3. 0
    23 March 2021 07: 01
    What I don't understand is: where is the stealth? This looks like World War II dogfighting. Who fights like that in the 21st century? And no missiles either, from what we can see.
    1. +2
      23 March 2021 07: 18
      Quote: TerraSandera
      Who is fighting like that in the 21st century?

      Not even in computer games.
      And in fact, for recent decades the US concept of fighter employment has been built around avoiding close combat. Yet suddenly they are simulating the opposite. A good question.
  4. +1
    23 March 2021 07: 04
    First, this is not "artificial intelligence" but an analogue of it, because no exact definition even exists of what real AI is! Second, what is the computer's actual size and power? Perhaps it is a complex occupying an enormous room, in which case its pseudo-victory is understandable. Third, programs are written by people, and life has no ideal conditions: a factor the program ignores or lacks (heavy rain and wind, for example) can play the decisive role. So all these games are just self-promotion by the "computer generation," showing that they can write such programs. Aren't they great!!!
    1. +2
      23 March 2021 07: 36
      Second, what is the computer's actual size and power? Perhaps it is a complex occupying an enormous room, in which case its pseudo-victory is understandable.

      Even so. Miniaturization is advancing at tremendous speed, and in any case that power is enough to remotely control a single UAV or a whole swarm. Communication channels, as practice has shown, can be fully protected.
    2. 0
      23 March 2021 08: 59
      Tesla's "AI" occupies a small part of the trunk... An aircraft can afford 30 times more... just imagine...
      1. +2
        23 March 2021 14: 04
        Tesla's AI is not in the trunk, unlike other self-driving cars.
        The entire "AI" is one board, and it is duplicated: two identical chips run in parallel and check each other.
  5. +3
    23 March 2021 07: 06
    All this is well and good, but however it turns out in the end, "it was smooth on paper, but they forgot about the ravines." My question is whether it is possible to code into the program every nuance that can occur in reality, nuances a human will react to one way or another (sooner or later) but the AI will not, because the program doesn't provide for them?
    1. +1
      23 March 2021 10: 39
      Over time, by trial and error, 99% of all possible situations will be covered... but there is a high probability that in the remaining 1% a human would behave just like the machine... and most likely the share of such situations will turn out much higher.)
  6. +3
    23 March 2021 07: 12
    That is what testing is for: to find the blank spots "not foreseen by the program."
    The usual approach.
    First under highly idealized conditions, then under conditions closer and closer to real ones.
  7. 0
    23 March 2021 07: 29
    US virtual dogfight: AI fighter defeats real pilot 5-0

    An interesting result, but... it needs confirmation in real hardware.
    To clarify: I doubt that technology created earlier around pilot control will show an overwhelming advantage when the same airframe is placed under AI control... and as for technology designed for AI from the start, we can talk about that when it appears in real life.
  8. +1
    23 March 2021 07: 33
    Maybe the pilot was simply bad at the game. You need not five rounds but at least 25, and then watch how the "intellect" copes.
    1. +3
      23 March 2021 09: 10
      And in a war, will the pilot also get 25 attempts?
      1. 0
        23 March 2021 09: 48
        An algorithm of counter-actions against such systems will appear.
  9. -4
    23 March 2021 08: 17
    What's all this talk about g-loads and so on? Computers don't like overloads either; they also suffer from temperature swings, etc. Just take a fist, one fist, and hit your laptop. With high probability it will break for good. After a comparable blow a person may fall (say you're an athlete), but he will most likely live, and maybe even hit back.

    Of course, you can build the computer to withstand certain stresses; then again, the pilot does not fly naked in the cockpit either, and he has special training. And, a little secret: the airframe also has a g-limit, as does every piece of equipment bolted into it, from the engine to the light bulb. Yes, in general an airplane without a pilot will be lighter than the same one with a pilot. True, only if it is controlled from the ground or from another aircraft. If it must make decisions itself, that is no longer a given; more on that below.

    What did these "tests" show? That inside a game engine a bot with "AI" can out-fly the pilot? That has always been known, especially if the bot cheats and the pilot is new to the game. And how exactly would resistance to g-loads help the AI in this fight? It out-flew a human in close combat, a dogfight. Didn't the Americans themselves push the idea that the dogfight is obsolete and missiles rule? Well, yes, they rule. A missile is exactly that same AI aircraft, only single-use. It is light and can, in theory, maneuver better than a comparatively heavier fighter.

    So why haven't missiles defeated everyone? Because besides a small energy reserve (it depends on the missile), they have few brains. How much do a pilot's brains weigh? About 1.5 kg, plus the supporting equipment in the form of the pilot himself, roughly another 100 kg. What is the pilot's computing power? Hard to say; a human cannot be measured in FLOPS, but it can be estimated roughly. You have all seen the captchas with pictures of buses and bicycles? They exist, if anything, to separate robots from people: a typical modern recognition algorithm cannot solve them. Otherwise the captcha would be different.

    More or less intelligent computer brains are called "data centers," and yes, some tasks they solve very fast. But the more complex the task, the slower they get, and as tasks grow harder they lose to a human. The same captchas; the Turing test is, in essence, about this. All while devouring a heap of electricity, needing temperature control, and so on. A data center can, of course, be lifted into the air; it won't fit in anything smaller than a Boeing 747. But then forget super-maneuverability: the Boeing itself can't do it, and the data center wouldn't enjoy it either.

    Human beings carry 86 billion neuron-processors. Each one works (not very fast, admittedly), but all in parallel, and they have been sharpened by evolution for their functions over millions of years. We have been building computers for less than 100 years and have achieved a lot, but so far they beat humans only where the space of choices is limited: chess, computer games. In reality, it goes like this:
    1) Artificial intelligence is: Does the robot carry footprints in places where the human "pollinator" would simply not be able to make them?
    2) The Su-57 aircraft is: "resurrection, flying, safe, life-resistant, maneuverable, being a fighter", "pressurized, especially economical"
    3) DARPA is: Solves the problem of long and sluggish holidays

    These "thoughts," if one may call them that, were produced by one of the best models, GPT-3, trained on 80 billion tokens. I typed in "the concept is:" and clicked "continue." In theory the AI should understand that a definition of the concept is wanted. At least that's what the authors say.

    But alas. You can find it on the internet and try it yourself.

    So yes, of course AI is needed on board, to solve AI problems. And humans must decide the human ones.
    1. +4
      23 March 2021 09: 02
      Any specialized machine will outdo a human at its specialty. The human brain is too universal, which is why it cannot be on top in everything. I'm not writing about unique individuals; that's why they are unique. But you can't even beat a calculator at extracting the root of a number... and that says it all.
      1. 0
        23 March 2021 09: 43
        Why can't I? Easily. When computing the root of 4: the calculator needs to spend at least some time on the computation, while I know the result in advance. And I don't consider myself unique at all. Then again, everything I can do well, no other person in the world can do in quite the same way.
        1. +1
          23 March 2021 14: 05
          Good calculators have memory too
  10. +3
    23 March 2021 10: 38
    A whiff of mothballs, and that's news on VO for you.
  11. +2
    23 March 2021 10: 38
    This is the end. Man is dying out (as a class).
    1. -1
      23 March 2021 19: 14
      Not dying out; he is simply creating a replacement for himself, for the very distant future. Russia is working in this direction too.
      "Scientists of the NRC 'Kurchatov Institute,' as part of a research group, have obtained a promising material for building self-learning neuromorphic computing systems. Its energy efficiency exceeds traditional computers by several orders of magnitude.

      The new material, manufactured using the original technology, has all the necessary characteristics to simulate the neural contacts of a biosimilar computing system. On its basis, a prototype of a self-learning robotic device was created.
      According to the scientist, these technologies open up tremendous opportunities for solving various cognitive tasks, such as text and speech processing, pattern recognition, and decision making.

      The new material is a layered structure of lithium niobate with the inclusion of metal nanocrystals. It was made by sequential deposition of thin films by ion-beam deposition. The researchers found that the concentration of metallic nanocrystals in a material has a significant effect on its properties. By varying the composition of thin films, they were able to select “ideal” parameters at which the nanocomposite demonstrates the best characteristics.

      The material developed by the scientists mimics artificial neural contacts much more accurately than traditional metal oxide semiconductors. The reason for this is the mechanisms by which this nanocomposite functions in a neuromorphic device. As in a living system, signal processing here has an analog character, and the change in the strength of the connection between artificial neurons is carried out by changing the concentration and diffusion of ions.

      A distinctive feature of the new material is that the neuromorphic system created on its basis is capable of imitating the "plasticity" property of the brain. So, in a living organism, dopamine can regulate the strength of interaction between neurons, and in a new device, artificial neurotransmitters play this role.

      The advantage of such a system is that the learning process does not require the presence of a "teacher". The device adjusts itself by interacting with the environment, which is the source of the so-called "rewards" and "punishments".

      Based on the new nanocomposite, scientists have created arrays of memristors and constructed prototypes of neuromorphic systems. Pilot tests have shown that such biosimilar devices are able to "adapt" to different initial conditions. This opens up prospects for the creation of large neuromorphic systems with the ability to self-learn. "
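The "rewards and punishments" learning described in the quoted release is, in software terms, reinforcement learning. The following minimal tabular Q-learning sketch on a toy corridor task illustrates the idea only; nothing in it comes from the Kurchatov work, and all parameters are arbitrary:

```python
import random

# Toy task: agent stands on positions 0..4; reaching 4 is a "reward" (+1),
# reaching 0 is a "punishment" (-1). No teacher: the agent adjusts itself
# purely from the rewards and punishments the environment hands back.
N_STATES, ACTIONS = 5, (-1, +1)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration
random.seed(0)

for _ in range(500):                       # 500 training episodes
    s = 2                                  # always start in the middle
    while 0 < s < 4:
        if random.random() < eps:          # explore occasionally
            a = random.choice(ACTIONS)
        else:                              # otherwise act greedily
            a = max(ACTIONS, key=lambda b: q[(s, b)])
        s2 = s + a
        r = 1.0 if s2 == 4 else (-1.0 if s2 == 0 else 0.0)
        best_next = 0.0 if s2 in (0, 4) else max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])  # Q-update
        s = s2

policy = {s: max(ACTIONS, key=lambda b: q[(s, b)]) for s in range(1, 4)}
print(policy)  # with these settings the learned policy is +1 (toward reward) everywhere
```

The structure (environment feedback driving weight changes, no labeled "teacher") is the same whether the weights live in a Python dict or, as in the release, in memristor conductances.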
  12. +1
    23 March 2021 21: 49
    There was a full-length film about this training fight.
    First, several AI programs "fought" each other.
    Then the winner fought an experienced instructor pilot.
    Identical aircraft, F-16s, in close combat.
    The pilot never managed to outplay the AI.
  13. -1
    24 March 2021 00: 15
    Nonsense. We've all played flight simulators on a computer. Who knows, but I'm sure that when I play cards against my computer, the computer knows my cards and I don't know its. Moreover, the computer knows what is coming up in the deck. So a virtual fight is nonsense: the computer will play along with itself, 100%.
  14. DMi
    +1
    24 March 2021 00: 22
    Computers long ago outplayed man at chess and at the Japanese game of go. Top "protein" players also used to boast about their "intuition" and "creativity," and how dumb "if-then" algorithms would never play like a human. True, they couldn't play like a human; they learned to play two heads better. Now the world champions in both games meekly come to an ordinary computer for advice and training. It seems to me the air combat problem is an order of magnitude simpler than a chess game, if not two. The only practical problem is getting the AI reliable information about its environment in real time. As for deciding what to do, it will decide both faster and more accurately than a human, even in a dogfight furball. That much is obvious.
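The claim that plain search beat human "intuition" in games can be illustrated by the core idea behind every classical game engine: exhaustive minimax over the game tree. A toy version for 1-pile take-away Nim (take 1-3 objects, taking the last one wins), chosen here only because its tree is small enough to search completely:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(pile: int) -> tuple[int, bool]:
    """Return (move, can_win) for the player to move in 1-3 take-away Nim."""
    for take in (1, 2, 3):
        if take == pile:
            return take, True      # taking the last object wins immediately
        if take < pile and not best_move(pile - take)[1]:
            return take, True      # leave the opponent a losing position
    return 1, False                # every move loses against optimal play

print(best_move(10))  # (2, True): leave the opponent a multiple of 4
```

No intuition anywhere, just "if-then" over every continuation, yet the play is perfect: the search rediscovers the known strategy of always leaving a multiple of four. Chess and go engines apply the same principle with pruning and learned evaluation on top.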
    1. 0
      24 March 2021 01: 40
      "The only practical problem is getting the AI reliable information about the environment in real time."
      That is no longer a problem: for that it will have the aircraft's entire avionics suite at its disposal, as well as links to satellites, ground radars, other aircraft, and so on.
      1. DMi
        0
        24 March 2021 09: 20
        That information is clearly insufficient for close air combat. Besides, any satellite link and any avionics can be crushed by electronic warfare, which is exactly what is being done to drones now. Ground robots and driverless cars navigate by optical channels. Military AI will likewise be taught to work through optics, with everything else as backup. And when that happens, a human had better not show up in the combat zone at all. He will have no chance, whether in a pilot's seat or on foot.
  15. 0
    25 March 2021 05: 44
    They also forgot that pilots will have to be trained to work with this AI.
  16. -1
    25 March 2021 05: 52
    The "Khibiny" pods on the Su-35S and Su-34 will suppress the reply signals of enemy aircraft to the F-16's "friend-or-foe" interrogations. Then every flying machine will become an adversary to this "artificial intelligence." The American pilot will suddenly discover he is surrounded by "Russian" fighters and will be forced either to fight his own side or to run away on afterburner.
    1. 0
      April 6 2021 11: 26
      These are wet dreams, based on nothing but propaganda about the supposedly fantastic capabilities of "Khibiny."
  17. 0
    April 6 2021 11: 25
    In principle an unmanned fighter is superior to a manned one, with a single drawback: the problem of deciding to attack a particular target. In other words, the algorithm.