Military Review

Artificial Intelligence. Part One: The Path to Superintelligence


The reason this article (and those that follow) exists is simple: artificial intelligence may be not just an important topic of discussion, but the most important one for our future. Everyone who digs even a little into the potential of artificial intelligence concedes that the topic cannot be ignored. Some, including Elon Musk, Stephen Hawking, and Bill Gates, hardly the least intelligent people on our planet, believe that artificial intelligence poses an existential threat to humanity, comparable in scale to our complete extinction as a species. So sit back, and let's dot all the i's.

“We are on the edge of change comparable to the rise of human life on Earth” (Vernor Vinge).

What does it mean to stand on the threshold of such changes?

Nothing special, it would seem. But remember: standing at such a point on the chart means you cannot see what lies to your right. It should really feel like this:

The feeling is perfectly normal; the flight is going fine.

The future is coming

Imagine that a time machine took you back to 1750, a time when the world lived in a permanent power outage, when long-distance communication between cities meant cannon shots, and when all transport ran on hay. Suppose you got there, grabbed someone, and brought him to 2015 to show him how things are here. We cannot really grasp what it would be like for him to see shiny capsules speeding along the roads; to talk with people on the other side of the ocean; to watch a sports game being played a thousand kilometers away; to hear a musical performance recorded 50 years ago; to play with a magic rectangle that can take a photo or capture a live moment; to call up a map with a paranormal blue dot showing his location; to look at someone's face and talk to him across many kilometers, and so on. All of this is inexplicable magic to a person from almost three hundred years ago. And that is before we mention the Internet, the International Space Station, the Large Hadron Collider, nuclear weapons, and the general theory of relativity.

For him, such an experience would not be merely surprising or shocking; those words do not begin to convey the scale of the mental collapse. Our traveler might actually die.

But here is the interesting part. Suppose he returns to 1750 and, envious that we got to watch his reaction to 2015, takes the time machine and tries the same trick on, say, the year 1500: he travels there, finds a person, brings him to 1750, and shows him everything. The guy from 1500 would be immensely shocked, but he would hardly die. Although he would certainly be amazed, the difference between 1500 and 1750 is far smaller than between 1750 and 2015. The man from 1500 would be surprised by some of the physics, astonished by what Europe had become under the iron heel of imperialism, and would redraw his mental map of the world. But daily life in 1750, transportation, communication, and so on, would hardly surprise him to death.

No, for the guy from 1750 to have as much fun as we had with him, he would have to go much further back, perhaps to around 12,000 BC, before the first agricultural revolution gave rise to the first cities and to the very concept of civilization. If someone from the hunter-gatherer world, from a time when humans were by and large just another animal species, saw the vast human empires of 1750, with their towering churches, their ships crossing the oceans, their concept of being "inside" a building, and all of that accumulated knowledge, he would most likely have died.

And then, after dying, he would get envious and want to do the same. He would go back another 12,000 years, to around 24,000 BC, grab a man, and drag him into his own time. And the new traveler would just say: "Well, that's fine, thank you." Because for the same effect, a man from 12,000 BC would have to go back more than 100,000 years and show the locals fire and language for the first time.

For someone transported into the future to be surprised to death, progress has to cover a certain distance. A Mortal Progress Point (MPP) has to be reached. In hunter-gatherer times an MPP took over 100,000 years; the next one was not reached until about 12,000 BC. After that, progress moved faster and had radically transformed the world again by roughly 1750. Then it took just a couple of hundred years, and here we are.

This picture, in which human progress moves faster as time goes on, is what futurologist Ray Kurzweil calls the Law of Accelerating Returns. It happens because more advanced societies are able to push progress forward at a faster rate than less developed ones. People of the 19th century knew more than people of the 15th century, so it is no surprise that progress in the 19th century went faster than in the 15th, and so on.

This works on a smaller scale too. The film "Back to the Future" was released in 1985, and its "past" took place in 1955. In the film, when Michael J. Fox goes back to 1955, he is taken aback by the novelty of televisions, the price of soda, the unpopularity of his guitar sound, and the variations in slang. It was a different world, of course, but if the film were shot today with the past set in 1985, the difference would be far more dramatic. A Marty McFly arriving from the era of personal computers, the Internet, and mobile phones would be far more out of place than the Marty who went to 1955 from 1985.

All of this is due to the Law of Accelerating Returns. The average rate of progress between 1985 and 2015 was higher than the rate between 1955 and 1985, because in the first case the world was more developed, saturated with the achievements of the preceding thirty years.

Thus, the more achievements accumulate, the faster change happens. But shouldn't this give us some hints about the future?

Kurzweil suggests that all the progress of the 20th century could have been achieved in just 20 years at the rate of progress of the year 2000; in other words, by 2000 the rate of progress was five times the average rate of the 20th century. He also believes that the equivalent of the entire 20th century's progress occurred between 2000 and 2014, and that another 20th century's worth will occur by 2021, in just seven years. A few decades later, the equivalent of the 20th century's progress will happen several times a year, and later still, in under a month. Ultimately, the Law of Accelerating Returns suggests that the 21st century will deliver 1,000 times the progress of the 20th.
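Kurzweil's arithmetic can be sanity-checked with a toy model; here is a minimal sketch, assuming only that the rate of progress grows exponentially, with the doubling time solved from the "20th century in 20 years at the 2000 rate" claim (the model is an illustration of mutual consistency, not Kurzweil's actual method):

```python
import math

# Toy model: rate(t) = r0 * e^(k*t), with t = years after 2000, r0 = rate in 2000.
# Claim: a full 20th century of progress P = 20 * r0 (20 years at the 2000 rate),
# and the same P was delivered between 2000 and 2014. Solve for k by bisection:
#   integral of r0*e^(k*t) from 0 to 14 = 20*r0  =>  (e^(14k) - 1) / k = 20
def gap(k):
    return (math.exp(14 * k) - 1) / k - 20

lo, hi = 1e-6, 1.0
for _ in range(100):            # bisection on the monotone function gap(k)
    mid = (lo + hi) / 2
    if gap(mid) > 0:
        hi = mid
    else:
        lo = mid
k = (lo + hi) / 2

# With that k, when is the NEXT 20th-century's-worth of progress finished?
#   integral from 14 to T of r0*e^(k*t) dt = 20*r0  =>  e^(k*T) = e^(14k) + 20k
T = math.log(math.exp(14 * k) + 20 * k) / k
print(f"implied rate-doubling time: {math.log(2) / k:.1f} years")
print(f"next '20th century' of progress finished around the year {2000 + T:.0f}")
```

Under these assumptions the next full-century's-worth of progress lands around 2022, close to Kurzweil's seven-year figure; the point is only that his numbers are consistent with smooth exponential growth.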

If Kurzweil and his supporters are right, 2030 will surprise us the way 2015 would have surprised the guy from 1750; that is, the next Mortal Progress Point may take only a couple of decades, and the world of 2050 will differ from today's so much that we would hardly recognize it. And this is not fiction. Many scientists smarter and better educated than you or me believe it, and if you look at history, you will see that the prediction follows from pure logic.

Why, then, when we hear statements like "the world will change beyond recognition in 35 years," do we shrug skeptically? There are three reasons for our skepticism about predictions of the future:

1. When it comes to history, we think in straight lines. Trying to picture the progress of the next 30 years, we look at the progress of the previous 30 as an indicator of how much is likely to happen. When we think about how the world will change in the 21st century, we take the progress of the 20th century and add it to the year 2000. Our guy from 1750 makes the same mistake when he fetches someone from 1500 and tries to surprise him. We intuitively think linearly, when we ought to think exponentially. Strictly speaking, a futurologist should predict the next 30 years not by looking at the previous 30, but by judging from the current rate of progress. The forecast would be more accurate, yet still off the mark. To think about the future correctly, you have to picture things moving much faster than they are moving now.
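The gap between the two forecasting styles is easy to quantify; a minimal sketch, assuming for illustration that progress per decade simply doubles (a made-up rate, chosen only to show the shape of the error):

```python
# Progress per decade doubles: 1, 2, 4, 8, ... units per decade.
decades = 6                       # forecast horizon: 60 years
rate = 1.0                        # progress units delivered in the most recent decade

# Straight-line thinking: "the next 60 years will look like the last 10, times 6."
linear_forecast = rate * decades

# What the doubling model actually delivers over the same 60 years.
exponential_forecast = 0.0
r = rate
for _ in range(decades):
    r *= 2                        # each coming decade doubles the previous rate
    exponential_forecast += r

print(linear_forecast, exponential_forecast)  # 6.0 vs 126.0
```

The linear extrapolator predicts 6 units of progress where the doubling world delivers 126, an error of a factor of 21; and the error grows without bound as the horizon lengthens.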


2. The trajectory of recent history often gives a distorted picture. First, even a steep exponential curve looks linear when you see only a small piece of it. Second, exponential growth is not always smooth and uniform. Kurzweil believes that progress moves in S-shaped curves.

Each curve passes through three phases: 1) slow growth (the early phase of exponential growth); 2) rapid growth (the explosive, late phase of exponential growth); 3) a plateau, as the particular paradigm matures.

If you look only at recent history, the part of the S-curve you currently occupy can hide the true speed of progress from your perception. The stretch between 1995 and 2007 saw the explosive development of the Internet, the introduction of Microsoft, Google, and Facebook to the public, the birth of social networks, and the rise of cell phones and then smartphones. That was phase 2 of our curve. But 2008 to 2015 was a less groundbreaking period, at least on the technological front. Someone thinking about the future today might take the last couple of years as a gauge of the overall pace of progress and miss the bigger picture. In fact, a new and powerful phase 2 may be brewing right now.

3. Our own experience makes us grumpy old men when it comes to the future. We base our ideas about the world on personal experience, and that experience has fixed the recent past's growth rate in our heads as "the way things go." Our imagination is limited too, since it builds predictions out of that same experience; more often than not, we simply lack the tools to predict the future accurately. When we hear forecasts that clash with our everyday sense of how things work, we instinctively dismiss them as naive. If I told you that you will live to 150 or 250, or maybe never die at all, your instinct would be: "That's silly; everything I know from history says everyone dies." And that is true: no one has lived that long. But no airplane flew before the invention of the airplane, either.

So while skepticism feels reasonable, it is most often wrong. If we arm ourselves with pure logic and expect the usual historical zigzags, we should conclude that very, very, very much is going to change in the coming decades; much more than intuition suggests. Logic also says that if the most advanced species on the planet keeps making giant leaps forward, faster and faster, at some point a leap will be so drastic that it radically changes life as we know it. Something similar happened in evolution, when humans became so clever that they completely changed the life of every other species on planet Earth. And if you spend a little time reading about what is happening in science and technology right now, you may start to see clues about what the next giant leap will be.

The path to superintelligence: what is AI (artificial intelligence)?

Like many people on this planet, you are used to thinking of artificial intelligence as a silly science-fiction concept. But lately a lot of serious people have been voicing concern about that silly concept. What is going on?

There are three reasons that lead to confusion around the term AI:

We associate AI with movies: "Star Wars", "The Terminator", "2001: A Space Odyssey". But like the robots in them, the AI of these films is fiction, so the Hollywood picture distorts our perception: AI comes to seem familiar, domesticated and, of course, evil.
AI is a very broad field. It stretches from the calculator in your phone, through self-driving cars, to something far in the future that will change the world drastically. AI refers to all of these things, and that is confusing.
We use AI every day, but often do not even realize it. As John McCarthy, who coined the term "artificial intelligence" in 1956, put it: "as soon as it works, no one calls it AI anymore." AI has come to sound more like a mythical prediction about the future than something real, while at the same time carrying the flavor of a pop concept from the past that never came true. Ray Kurzweil says he hears people dismissing AI as a failed 1980s idea, which he compares to "insisting that the Internet died with the dotcoms in the early 2000s."
Let's be clear. First, stop thinking about robots. A robot is a container for AI that sometimes mimics the human form and sometimes does not, but the AI itself is the computer inside the robot. The AI is the brain, and the robot is the body, if it has a body at all. For example, the software and data behind Siri are artificial intelligence, the woman's voice is the personification of that AI, and there is no robot anywhere in the system.

Secondly, you have probably heard the term "singularity," or "technological singularity." In mathematics it describes an unusual situation where the ordinary rules no longer apply. In physics it describes the infinitely small and dense point of a black hole, or the initial point of the Big Bang; again, the laws of physics break down there. In 1993, Vernor Vinge wrote a famous essay applying the term to the moment in the future when the intelligence of our technologies surpasses our own, at which point life as we know it will change forever and its usual rules will no longer hold. Ray Kurzweil later refined the term, saying the singularity will be reached when the Law of Accelerating Returns hits an extreme point, when technological progress moves so fast that we stop noticing its achievements, almost infinitely fast. Then we will live in an entirely new world. Many experts have stopped using the term, however, so we will not refer to it often either.

Finally, while there are many types and forms of AI within the broad concept, its most important categories are defined by caliber. There are three main ones:

Artificial Narrow Intelligence, or ANI (weak AI). ANI specializes in one area. There are ANI systems that can beat the world chess champion, but that is all they can do. There is one that can propose the best way to store data on a hard disk, and that's it.
Artificial General Intelligence, or AGI (strong AI). Sometimes also called human-level AI. AGI refers to a computer that is as intelligent as a human, a machine capable of performing any intellectual task a person can. Creating an AGI is much harder than creating an ANI, and we have not done it yet. Professor Linda Gottfredson describes intelligence as "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience." An AGI should be able to do all of this as easily as you can.
Artificial Superintelligence, or ASI. The Oxford philosopher and AI theorist Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." ASI covers everything from a computer slightly smarter than a human to one trillions of times smarter in every direction. ASI is the reason for the growing interest in AI, and the reason the words "extinction" and "immortality" keep appearing in these discussions.
Today humanity has already conquered the lowest rung of the AI caliber ladder, narrow AI (ANI), in many ways. The AI revolution is the road from narrow AI (ANI) through general AI (AGI) to superintelligence (ASI). We may not survive that road, but it will definitely change everything.

Let's take a close look at how leading thinkers in this area see this path and why this revolution can happen faster than you might think.

Where are we in this stream?

Artificial narrow intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at one specific task. A few examples:

* Cars are jam-packed with ANI systems, from the computer that decides when the anti-lock brakes should kick in to the computer that tunes the fuel-injection parameters. Google's self-driving cars, currently in testing, will contain robust ANI systems that perceive and react to the world around them.

* Your phone is a little ANI factory. When you use the maps app, get recommendations for apps or music, check tomorrow's weather, talk to Siri, or do dozens of other things, you are using ANI.

* Your email spam filter is a classic ANI. It starts out knowing how to separate spam from useful mail, and then it learns, tailoring itself to your mail and your preferences.

* And that uncanny feeling when yesterday you searched for a screwdriver or a new plasma TV, and today helpful stores show you offers on other sites? Or when a social network recommends exactly the right people as friends? These are ANI systems working together, mapping your preferences, pooling information about you from across the Internet, and closing in on you. They analyze the behavior of millions of people and draw conclusions from those analyses, so that large companies can sell more, or make their services better.
* Google Translate is another classic ANI system, impressively good at one narrow thing. Voice recognition too. When your plane lands, its gate is not chosen by a human. Nor is the price of your ticket. The world's best players of checkers, chess, backgammon, and many other games are now narrow artificial intelligences.
* Google Search is one giant ANI that uses incredibly clever methods to rank pages and compose search results.

And that is just the consumer world. Sophisticated ANI systems are widely used in the military, in manufacturing, and in finance; in medical systems (remember IBM's Watson) and so on.

ANI systems in this form are not a threat. At worst, a buggy or badly programmed ANI can cause a local disaster: knock out a power grid, crash financial markets, and the like. But although ANI cannot create an existential threat, we should see the bigger picture: a crushing hurricane is coming, and ANI is its harbinger. Each new ANI innovation adds another block to the road leading to AGI and ASI. Or, as Aaron Saenz nicely put it, the ANI systems of our world are like "the amino acids in the primordial ooze of the early Earth": the not-yet-living components of life that one day will wake up.

The path from ANI to AGI: why is it so difficult?

Nothing reveals the complexity of human intelligence like trying to build a computer that is just as smart. Building skyscrapers, flying into space, unraveling the Big Bang: all of it is trivial next to replicating our own brain, or even just understanding it. The human brain is currently the most complex object in the known universe.

Perhaps you do not even suspect how hard it is to create an AGI (a computer that is smart like a human in general, not just in one narrow area). Building a computer that can multiply two ten-digit numbers in a split second is trivially easy. Building one that can look at a dog and a cat and say which is the dog and which is the cat is incredibly hard. Build an AI that beats a grandmaster at chess? Done. Now try to get it to read a paragraph from a six-year-old's book and not just recognize the words but understand their meaning. Google is spending billions of dollars trying to do exactly that. With hard things like computation, financial-market strategy, and language translation, the computer copes easily; with easy things like vision, movement, and perception, it does not. As Donald Knuth put it, "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking.'"

When you think about why, you realize that the things that seem easiest to us only seem that way because they have been optimized for us (and for animals) over hundreds of millions of years of evolution. When you reach for an object, the muscles, joints, and bones of your shoulder, elbow, and hand instantly execute long chains of physical operations, synchronized with what you see, to move your hand through three dimensions. It seems effortless because the perfected software in your brain handles it. The same trick is what makes registering a new account, with its crookedly written word to type (a captcha), simple for you and hell for a malicious bot. For your brain there is nothing hard about it: you just need to be able to see.

On the other hand, multiplying big numbers and playing chess are new activities for biological creatures; we have not had nearly enough time (millions of years) to get good at them, so a computer beats us without effort. Just think about it: which would you rather build, a program that multiplies large numbers, or one that recognizes the letter B in millions of renderings, in the most unpredictable fonts, written by hand or scratched with a stick in the snow?

One simple example: when you look at this picture, both you and your computer can tell that it shows alternating squares of two different shades.

But remove the black, and you will immediately describe the full scene: cylinders, planes, three-dimensional corners; the computer cannot.

It will describe what it sees as a set of two-dimensional shapes in different shades, which, strictly speaking, is true. Your brain does a ton of work interpreting depth, the play of shadows, and the light in the picture. In the picture below, a computer sees a two-dimensional white-gray-black collage, while in reality there is a three-dimensional stone.

And everything we have just named is only the tip of the iceberg of understanding and processing information. To reach the same level as a person, a computer must grasp the difference between subtle facial expressions, the difference between pleasure, sadness, satisfaction, and joy, and why Chatsky is admirable and Molchalin is not.

What to do?

The first step to creating AGI: increasing computational power

One of the things that must happen for AGI to become possible is growth in the power of computing hardware. If an artificial intelligence system is to be as smart as the brain, it needs to match the brain in raw computational power.

One way to express this capacity is the total number of operations per second (OPS) the brain can perform; you can arrive at that number by finding the maximum OPS of each brain structure and adding them together.

Ray Kurzweil concluded that it is enough to take a professional estimate of the OPS of one structure and its weight relative to the weight of the whole brain, then scale up proportionally to get an overall estimate. It sounds a bit shaky, but he did this many times with different estimates of different regions and always arrived at roughly the same number: on the order of 10^16, or 10 quadrillion OPS.
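The scaling trick itself is one line of arithmetic; a minimal sketch with illustrative numbers (the region figures below are hypothetical placeholders, not Kurzweil's actual data):

```python
# Kurzweil-style extrapolation: estimate one region, scale by its share of brain mass.
# The region numbers below are illustrative assumptions, not measured values.
region_ops = 1.0e14        # hypothetical OPS estimate for one well-studied structure
region_mass_g = 14.0       # hypothetical mass of that structure, in grams
brain_mass_g = 1400.0      # approximate mass of a whole human brain, in grams

whole_brain_ops = region_ops * (brain_mass_g / region_mass_g)
print(f"estimated whole-brain capacity: {whole_brain_ops:.0e} OPS")  # 1e+16
```

Scaling by mass assumes every gram of brain computes at the same rate, which is certainly false in detail; the claim is only that repeating the estimate across many regions keeps landing near 10^16.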

The fastest supercomputer in the world, China's Tianhe-2, has already passed that number: it performs about 34 quadrillion operations per second. But Tianhe-2 occupies 720 square meters of space, consumes 24 megawatts of power (our brain runs on just 20 watts), and cost 390 million dollars. Commercial or widespread use is out of the question.

Kurzweil suggests judging the state of computers by how many OPS 1,000 dollars can buy. When that number reaches the human level, 10 quadrillion OPS, AGI may well become part of our lives.

Moore's Law, the historically reliable rule that the maximum computing power of computers doubles roughly every two years, implies that computing, like human history overall, grows exponentially. By Kurzweil's thousand-dollar measure, 1,000 dollars currently buys about 10 trillion (10^13) OPS.

The exponential growth of computing, 20th-21st centuries. The scale on the right marks the brain of an insect, a mouse, one human, and all humans combined; the left axis shows calculations per second per 1,000 dollars; the bottom axis shows the year.

A 1,000-dollar computer already out-computes a mouse brain and is only about a thousand times weaker than a human one. That sounds like a poor showing until you remember that a computer was a trillion times weaker than the human brain in 1985, a billion times weaker in 1995, and a million times weaker in 2005. By 2025 we should have an affordable computer whose raw power rivals our brain's.
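Those milestones imply the gap shrinks about a thousandfold per decade, i.e. price-performance doubles roughly every year; a minimal sketch, assuming that historical rate simply continues:

```python
import math

# Ratio of human-brain OPS to $1000-computer OPS at each historical milestone.
milestones = {1985: 1e12, 1995: 1e9, 2005: 1e6}   # trillion, billion, million times weaker

# A 1000x shrink per decade is about one doubling of price-performance per year.
doublings_per_decade = math.log2(1000)
print(f"about {doublings_per_decade:.0f} doublings per decade")

# Project forward from 2005 until the gap closes (ratio <= 1).
ratio, year = milestones[2005], 2005
while ratio > 1:
    ratio /= 1000                                  # one more decade of progress
    year += 10
print(f"parity with the human brain around {year}")  # around 2025
```

Extrapolations like this flatter whoever draws them; the sketch only shows that the article's 1985/1995/2005 data points, extended unchanged, land exactly on 2025.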

Thus the raw power required for AGI is already technically available; within 10 years it will leave the supercomputer labs of China and spread around the world. But computing power alone is not enough. The next question: how do we give all that power human-level intelligence?

The second step to creating AGI: give it a mind

This part is the hard one. To tell the truth, nobody really knows how to make a machine intelligent; we are still trying to figure out how to create a human-level mind that can tell a cat from a dog, pick out a B drawn in the snow, and analyze a second-rate film. Still, there are a handful of far-sighted strategies, and at some point one of them should work.

1. Copy the brain
This option is like scientists sitting in a class next to a very clever child who always has the answers; even when they diligently try to master the material, they cannot come close to keeping up with him. Eventually they decide: to hell with it, we will just copy his answers. There is a logic here: we cannot figure out how to build a supremely complex computer, so why not take one of the best prototypes in the universe as a basis: our own brain?

The scientific world is working hard to figure out how our brain works and how evolution produced such a complex thing. By the most optimistic estimates, this will not succeed before 2030. But as soon as we understand all the secrets of the brain, its efficiency and power, we can borrow its methods in our technologies. For example, one computer architecture that mimics the brain is the neural network. It starts as a network of transistor "neurons" connected to each other by inputs and outputs, and it knows nothing, like a newborn. The system "learns" by trying to perform tasks, recognizing handwritten text and the like. The connections between transistors are strengthened after a correct answer and weakened after a wrong one. After many cycles of questions and answers, the system forms intelligent neural pathways optimized for particular tasks. The brain learns in a similar way, though in a far more sophisticated manner, and as we continue to study it, we keep discovering incredible new ways to improve neural networks.
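The strengthen-or-weaken rule described above can be shown in a few lines; a minimal sketch of a single artificial neuron learning the logical AND function (a toy illustration, vastly simpler than anything brain-like):

```python
# A single "neuron" learns AND: connections strengthen or weaken after each answer.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # connection strengths, initially "newborn"
bias = 0.0
lr = 0.1         # how much each correction strengthens/weakens a connection

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(20):                      # many cycles of question and answer
    for x, target in samples:
        error = target - predict(x)      # 0 if right; +1 or -1 if wrong
        w[0] += lr * error * x[0]        # strengthen on a miss, weaken on a false alarm
        w[1] += lr * error * x[1]
        bias += lr * error

print([predict(x) for x, _ in samples])  # [0, 0, 0, 1]
```

After a handful of passes, the weights have settled into a configuration that answers AND correctly; real networks differ mainly in scale and in using smoother update rules, not in the basic idea.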

An even more extreme form of plagiarism is the strategy called whole brain emulation. The goal: slice a real brain into thin layers, scan each of them, faithfully rebuild a three-dimensional model in software, and then run it on a powerful computer. Then we would have a computer officially capable of everything the brain can do: it would just need to learn and gather information. If engineers succeed, they will be able to emulate a real brain with such incredible accuracy that, once uploaded to the computer, the brain's identity and memory remain intact. If the brain belonged to Vadim before he died, the computer will wake up as Vadim, who will now be a human-level AGI, and we, in turn, can set about transforming Vadim into an incredibly intelligent ASI, which he will surely be pleased about.

How far are we from whole brain emulation? In truth, we have so far emulated only the brain of a millimeter-long flatworm, which contains 302 neurons in total. The human brain contains about 100 billion. If chasing that number seems hopeless, remember the exponential rate of progress. The next step will be emulating an ant's brain, then will come a mouse, and then a human will be within a stone's throw.

2. Try to follow in evolution's footsteps
Well, if we decide the clever kid's answers are too hard to copy, we can try to copy the way he studies and prepares for exams. Here is something we know: building a computer as powerful as the brain is possible, because the evolution of our own brain has proved it. And if the brain is too complex to emulate, we can try to emulate evolution instead. In fact, even if we can emulate a brain, the attempt may be like trying to build an airplane by ridiculously flapping our arms in imitation of birds' wings; most often, machines succeed through a machine-oriented approach, not through exact imitation of biology.

How do we simulate evolution to build an AGI? The method, called "genetic algorithms," works like this: there is a performance-and-evaluation cycle that repeats over and over (just as biological creatures "perform" by living and are "evaluated" by whether they reproduce). A group of computers attempts tasks, and the most successful of them pass their characteristics on to other computers, "breeding." The less successful are mercilessly tossed into the dustbin of history. Over many, many iterations, this process of natural selection breeds better and better computers. The difficulty lies in building and automating the breeding and evaluation cycles so that the evolutionary process runs on its own.
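The breed-evaluate-discard loop translates directly into code; a minimal sketch that evolves a bit string toward all ones (a deliberately trivial "task" standing in for real fitness tests):

```python
import random

random.seed(42)                      # deterministic toy run
GENES, POP, GENERATIONS = 20, 30, 60

def fitness(ind):                    # "evaluation": how well the individual performs
    return sum(ind)                  # toy task: maximize the number of 1-bits

def breed(a, b):                     # "breeding": successful traits are recombined...
    cut = random.randrange(GENES)
    child = a[:cut] + b[cut:]
    if random.random() < 0.3:        # ...with occasional random mutation
        i = random.randrange(GENES)
        child[i] ^= 1
    return child

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
initial_best = max(map(fitness, pop))
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP // 2]      # the unsuccessful half goes to the dustbin
    pop = survivors + [breed(random.choice(survivors), random.choice(survivors))
                       for _ in range(POP - len(survivors))]

best = max(pop, key=fitness)
print(fitness(best), "out of", GENES, "(started at", initial_best, ")")
```

Because the top half always survives, the best score can only improve; the hard engineering problem the paragraph mentions is hidden entirely inside `fitness`, which for real AI would have to evaluate something far more interesting than counting bits.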

The disadvantage of copying evolution is that evolution needs billions of years to get anywhere, and we want to do it in a few decades.

But we have big advantages over evolution. First, evolution has no gift of foresight; it works randomly, producing useless mutations among others, while we can steer the process within the framework of the goals we set. Second, evolution does not aim at anything, including intelligence; sometimes an environment even selects against intelligence (since it consumes a lot of energy). We, on the other hand, can aim directly at increasing intelligence. Third, to select for intelligence, evolution has to innovate in lots of unrelated ways, like overhauling how cells produce energy, whereas we can strip out the extras and simply use electricity. Without a doubt, we would be faster than evolution; but again, it is not clear whether we can surpass it.

3. Make it the computer's problem
This is the last resort: scientists despair entirely and try to program the program to develop itself. Yet this method may be the most promising of all. The idea is to build a computer with two main skills: researching AI and coding changes into itself, which would let it not only learn more but also improve its own architecture. We would teach computers to be computer engineers for themselves, so that they can develop themselves, and their main task would be to figure out how to become smarter. We will talk more about this later.
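A toy version of "improve your own machinery" is an optimizer that tunes not only its answer but its own search parameter; a minimal sketch of a hill-climber adapting its step size (nothing here edits real code, it only adjusts its own strategy):

```python
# A hill-climber that improves both its solution AND its own search strategy:
# the step size it searches with is itself adapted based on success or failure.
def score(x):
    return -(x - 7.0) ** 2           # toy objective: the peak is at x = 7

x, step = 0.0, 4.0                   # starting guess and initial "strategy"
for _ in range(200):
    for candidate in (x + step, x - step):
        if score(candidate) > score(x):
            x = candidate            # improvement: keep the better answer...
            step *= 1.1              # ...and make the strategy bolder
            break
    else:
        step *= 0.5                  # no improvement: refine the strategy itself

print(round(x, 3))                   # converges near 7.0
```

The gap between this toy and recursive self-improvement is, of course, enormous: here the "self" being improved is a single number, not an architecture. But the shape of the loop, act, evaluate, then revise your own way of acting, is the same idea the paragraph describes.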

All this can happen very soon.

Rapid hardware development and parallel experimentation with software are under way simultaneously, and AGI could arrive quickly and unexpectedly, for two main reasons:

1. Exponential growth is intense, and what looks like a snail's pace can quickly turn into leaps and bounds — this animation illustrates the concept well:

[Animation caption] When will computers surpass human intelligence? The volume of Lake Michigan in fluid ounces roughly equals the brain's capacity in operations per second. If computing power doubles every 18 months, then for a very long time you see almost no result — and then everything happens at once.
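The caption's arithmetic can be checked in a few lines. The lake's volume in fluid ounces is an assumed round figure (about 10^17), standing in for the brain's capacity in operations per second:

```python
import math

LAKE_OZ = 1.7e17      # Lake Michigan in US fluid ounces — an assumed round figure (~10^17)
DOUBLING_YEARS = 1.5  # computing power doubles every 18 months

def years_to_fill(start_oz=1.0):
    """How many doublings (and years) from one ounce to a full lake?"""
    doublings = math.ceil(math.log2(LAKE_OZ / start_oz))
    return doublings, doublings * DOUBLING_YEARS

doublings, years = years_to_fill()
print(doublings, years)              # 58 87.0

# The counter-intuitive part: 7 doublings (~10 years) before the end,
# the lake is still less than 1% full.
print(f"{1 / 2**7:.2%}")             # 0.78%
```

Under these assumptions the lake takes almost 90 years to fill, yet is under one percent full with only a decade to go — which is why exponential progress looks like nothing happening, right up until it doesn't.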

2. When it comes to software, progress can seem slow, and then a single breakthrough instantly changes its speed (a good example: under the geocentric worldview, calculating the workings of the universe was difficult; the discovery of heliocentrism made everything much simpler). Or, with a computer that improves itself, everything may look extremely slow — and then a single tweak to the system separates it from being a thousand times more effective than a human, or than its previous version.

The road from AGI to ASI
At some point we will certainly get AGI — artificial general intelligence, computers with a general, human level of intelligence. Computers and people will live side by side. Or they will not.

The fact is that an AGI with the same level of intelligence and computing power as a human would still have significant advantages over people. For example:

Speed. The brain's neurons fire at about 200 Hz, while modern microprocessors (which will be far slower than what we will have by the time AGI is created) already run at 2 GHz — 10 million times faster than our neurons. And the brain's internal communications, which travel at up to 120 m/s, are hopelessly outclassed by a computer's ability to use optical links at the speed of light.
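A quick sanity check of the two ratios quoted above (the neuron firing rate, processor clock, and signal speeds are the figures from the text):

```python
NEURON_HZ = 200      # typical firing rate of a biological neuron, ~200 Hz
CPU_HZ = 2e9         # a modern 2 GHz processor clock
AXON_M_S = 120       # signal speed along brain pathways, m/s
LIGHT_M_S = 3e8      # speed of light, m/s (optical interconnects)

print(f"clock ratio:  {CPU_HZ / NEURON_HZ:,.0f}x")    # 10,000,000x
print(f"signal ratio: {LIGHT_M_S / AXON_M_S:,.0f}x")  # 2,500,000x
```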

Size and storage. The brain's size is limited by our skull, and it cannot grow much larger — otherwise signals traveling at 120 m/s would take too long to pass from one structure to another. Computers can expand to any physical size, add hardware, increase RAM and long-term memory — all far beyond our capabilities.

Reliability and durability. It is not only computer memory that outperforms the human kind. Computer transistors are more precise than biological neurons and less prone to deterioration (and can be replaced or repaired). Human brains tire quickly; computers can run non-stop, 24 hours a day, 7 days a week.


The ability to edit and upgrade; a wider range of possibilities. Unlike the human brain, a computer program can easily be fixed, updated, and experimented with. Upgrades can also target areas where the human brain is weak: the brain's visual software is superbly engineered, yet from an engineering standpoint its capabilities are still narrow — we see only the visible spectrum of light.

Collective ability. Humans surpass other species thanks to a grand collective mind. Starting with the development of language and the formation of large communities, moving through the invention of writing and printing, and now amplified by tools such as the Internet, humanity's collective mind is a major reason we have pulled so far ahead of all other species. But computers will be better still. A worldwide network of artificial intelligences running a single program, constantly synchronizing and developing itself, could instantly incorporate any new piece of information into its shared database, no matter where it was acquired. Such a group could also work toward one goal as a single whole, because computers are not hampered by dissenting opinions, clashing motivations, and self-interest, as people are.

An AI that reaches AGI, most likely through programmed self-improvement, will not regard "human-level intelligence" as an important milestone — that milestone matters only to us. It will have no reason to stop at such an arbitrary level. And given the advantages that even a human-level AGI would have, it is fairly obvious that human intelligence will be, for it, only a brief flash in the race toward intellectual superiority.

This development may surprise us very, very much, because from our point of view: a) the only yardstick we have for the quality of a non-human intelligence is animal intelligence, which is below ours by default; and b) to us, the smartest people always seem VASTLY smarter than the most stupid. Something like this:

That is, while an AI is merely climbing toward our level of development, we see it getting smarter, approaching animal intelligence. When it reaches the bottom of the human range — Nick Bostrom uses the term "village idiot" — we will be delighted: "Wow, it's already like a moron. Cool!" The thing is, on the overall spectrum of intelligence the distance from the village idiot to Einstein is tiny — so soon after an AI reaches idiot level and becomes an AGI, it will suddenly be smarter than Einstein.

And what will happen next?

Explosion of intelligence

I hope you have found all this interesting and fun, because from this point on, the topic we are discussing becomes abnormal and creepy. We should pause and remind ourselves that every fact stated above and below is real science and real forecasts of the future made by the most eminent thinkers and scientists. Just keep that in mind.

So, as noted above, all our current models for reaching AGI include a scenario in which the AI improves itself. And once it becomes an AGI, even the systems and methods it grew up with become smart enough to self-improve — if they wish. This gives rise to an intriguing concept: recursive self-improvement. It works like this.

An AI system at a certain level — say, the village idiot — is programmed to improve its own intelligence. Once it develops — say, to Einstein's level — it continues improving itself with Einstein's intellect: each step takes less time, and the jumps grow larger. They soon let the system surpass any human, and it keeps getting smarter. As the process accelerates, the AGI soars to dizzying heights of intelligence and becomes a superintelligent ASI. This process is called an intelligence explosion, and it is the clearest example of the law of accelerating returns.
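The runaway dynamic can be seen in a toy model (every number here is an illustrative assumption, not a prediction): let each self-improvement multiply the system's intelligence by a constant factor, and let a smarter system finish its next improvement proportionally faster.

```python
def intelligence_explosion(iq=80.0, step_years=10.0, gain=1.5, steps=12):
    """Toy model: each improvement multiplies intelligence by `gain`,
    and a system k times smarter finishes its next step k times faster."""
    t, history = 0.0, [(0.0, iq)]
    for _ in range(steps):
        t += step_years
        iq *= gain
        step_years /= gain            # smarter system -> faster next improvement
        history.append((round(t, 2), round(iq)))
    return history

for year, iq in intelligence_explosion():
    print(f"year {year:6.2f}: IQ-equivalent {iq:>6}")
```

Every step gives the same 1.5x gain, but because each step is faster than the last, the total time for all the steps converges to a finite horizon (30 years here) while the intelligence grows without bound — the "explosion" in miniature.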

Scientists argue over how quickly AI will reach AGI level — the majority believe we will have AGI by 2040, in just 25 years, which is very, very soon by the standards of technological development. Continuing the logical chain, it is easy to suppose that the transition from AGI to ASI will likewise happen extremely quickly. Something like this:

"It took decades for the first AI system to reach the lowest level of general intelligence, but it finally happened. The computer understands the world around it the way a four-year-old child does. Suddenly, literally an hour after reaching this milestone, the system produces a grand theory of physics unifying general relativity and quantum mechanics — something no human has been able to do. An hour and a half later, the AI has become an ASI, 170,000 times smarter than any human."

We do not even have suitable terms to characterize superintelligence on this scale. In our world "smart" means a person with an IQ of 130 and "stupid" one with an IQ of 85; we have no word for an IQ of 12,952. Our measuring sticks were not built for this.

The history of humankind tells us plainly and clearly: with intelligence come power and strength. That means that when we create an artificial superintelligence, it will be the most powerful being in the history of life on Earth, and all living things, humans included, will be entirely in its power — and this could happen within twenty years.

If our meager brains managed to invent Wi-Fi, then something a hundred, a thousand, a billion times smarter than us could easily calculate the position of every atom in the universe at any given moment. Everything we would call magic, every power attributed to an omnipotent deity — all of it would be at the ASI's disposal. Technology to reverse aging, cures for every disease, an end to hunger and even to death, control of the weather — everything suddenly becomes possible. So does the immediate end of all life on Earth. The smartest people on our planet agree that the appearance of an artificial superintelligence would amount to the appearance of a god on Earth. And one important question remains.

Will it be a good god?

Based on the essay by Tim Urban. The article uses materials from Nick Bostrom, James Barrat, Ray Kurzweil, Nils Nilsson, Steven Pinker, Vernor Vinge, Moshe Vardi, Russ Roberts, Stuart Armstrong and Kaj Sotala, Susan Schneider, Stuart Russell and Peter Norvig, Gary Marcus, Carl Shulman, John Searle, Jaron Lanier, Bill Joy, Kevin Kelly, Paul Allen, Stephen Hawking, Kurt Andersen, Mitch Kapor, Ben Goertzel, Arthur C. Clarke, Hubert Dreyfus, Ted Greenwald, Jeremy Howard.

  1. V.ic
    V.ic 6 August 2016 08: 15 New
    that we will get AGI by 2040, in just 25 years,

    I won't live to see it, though... Is that fortunate or the opposite? Upvoted.
    1. bastard
      bastard 6 August 2016 18: 42 New
      Quote: V.ic
      I won't live to see it, though... Is that fortunate or the opposite? Upvoted.

      So says Doctor of Biological Sciences, Professor S.V. Saveliev.
  2. pafegosoff
    pafegosoff 6 August 2016 08: 16 New
    Well, okay — I looked at the pictures and didn't read the text. A technological singularity is inevitable. A "beautiful far away" awaits humanity.
    And we will all die, as Satanic likes to say. Or "we are all monkeys," as Sergei Saveliev keeps reminding us.
    1. gladcu2
      gladcu2 6 August 2016 19: 39 New

      Fursenko's dream.

      It would be nice to have this in comic-book form. With the likes under the pictures...
  3. Simpsonian
    Simpsonian 6 August 2016 08: 22 New
    A $1,000 computer already beats the mouse brain in computational ability and is a thousand times weaker than a human's.

    Did anyone ask the mouse? What if something's wrong with your Google, which "translates like this", or with Tianhe-2?
    And how was all of this calculated (I don't mean the dollars)?
  4. midshipman
    midshipman 6 August 2016 08: 56 New
    Yes, a lot of interesting things await humanity. If only we Russian scientists were given the opportunity to invent and create, as it was in the USSR. I remember that for my first invention (an antenna on an LDPE tube) I was able to buy a cooperative apartment. I received nothing for the invention of selecting the true altitude (out of three measured heights) for the landing approach of our MiG and Su aircraft on the carrier. Nothing for the multifunctional AFS (antenna-feeder system) for the Su-50 either. In 2015 the PRC registered 1 million inventions, while the Russian Federation registered only 28 thousand. My colleagues created the Elbrus supercomputer — and what are we doing now? I have the honor.
    1. gridasov
      gridasov 6 August 2016 12: 29 New
      And now we cannot get acceptance for the basic principles of creating fundamentally new turbines, on which new aircraft engines and marine propulsors could be built. We cannot promote the concept of fundamentally new electric machines with a rotating magnetic torque. We cannot promote fundamentally new induction devices and the method of volumetric circuits instead of circuitry on flat boards. And most importantly, we cannot advance the foundations of artificial intelligence in the form of a unique new property of number, built on its function, that has never been used before. But that is no reason to get upset. It means everything has its time.
    2. atos_kin
      atos_kin 6 November 2016 10: 37 New
      Quote: midshipman
      and now what are we doing

      They want to make a new nation. The AI is in shock.
  5. ML-334
    ML-334 6 August 2016 10: 04 New
    The article is crap, of course. Artificial intelligence is a human being — at a level above the first Terminator and below the second in transformation; that is, we can be transformed into a certain mechanism. In other words, we are created by the Creator and carry within us a certain program that corresponds to God's commandments.
    1. ML-334
      ML-334 6 August 2016 13: 12 New
      Humanity is moving downward in its development. Adam and Eve, in principle, were the ideal until they plucked the forbidden fruit. The first civilizations, in my opinion, had technologies that allowed travel through space without flying machines and communication by thought; now communication at that level exists only in the parallel "afterlife". To the question of where the developed civilizations went, I will answer: the Creator cleaned them up — they imagined themselves gods and did not honor the Creator's commandments. In subsequent generations the Creator kept the brain blocked, the Reason locked. And in my opinion, if it were not for Jesus, the Creator would have cleaned us up too — and for that He would simply press a key. Masterpieces of developed eras reach us, but not a single working mechanism has reached us — only heaps of scrap metal.
      1. ML-334
        ML-334 6 August 2016 15: 28 New
        Surely everyone has had the feeling that something has already happened, and that you know what will happen in the next moment. Isn't that a program? In the course of life an adjustment is made — for the worse, in my opinion, though that's not certain; this applies to ancestral curses. Say my ancestor, around 1850-1860, witnessed the killing of a priest (a clergyman, it seems) and did not inform the police; my clan then receives from the Council (the body that corrects our program) an affliction in the form of CANCER — that is, the gradual extinction of the clan. Communication with the Council goes through a person's Soul; hence the expression "cry of the Soul". Information inherent from the Creator is inaccessible to either the Council or the Devil, provided the person himself does not want to give it away. In this World, however, it is the latter who dominates man. And about transplanting human organs: this is my personal opinion, but you are shortening your grandson's life. It is said "to the seventh generation" — so it will be. As for the pyramids: either a mechanism of movement, or someone was driven away, using Darwin, with the introduction of artificial intelligence.
    2. qwerty183
      qwerty183 11 July 2017 15: 40 New
      As soon as AI appears in its full form, humanity will come to an end. In support of this I will note that absolutely any person is a bundle of contradictions and, for the most part, a destroyer of his environment, and the urge to destroy rests on fear and greed. Out of fear, walls and guns are created; out of greed — slavery, hatred, and inequality. Look at the comments of VO users themselves: with what relish we watch and read about the killing of our own kind and the invention of new ways of killing; with what joy we expect our neighbor to be worse off than ourselves. And not only here — in everyday life it is the same. Unfortunately, man is not capable of creating a utopia and living in it. Any AI will work this out far faster than people, so the premise of the film Terminator has a right to life. One thing I cannot understand: why do representatives of other worlds and galaxies — whose existence, though I have not seen them personally, I do not doubt at all — tolerate us? In essence we, humanity, are a harmful insect, and a dangerous one in the future.
  6. srha
    srha 6 August 2016 10: 32 New
    If extrapolations held over long periods, if the smart were strong, if the modern man were actually smarter — rather than merely better at picking icons on an iPhone while the 15th-century man was better at harnessing a horse — even then the article would not be true.

    A point to consider: just as physics has limitations in the form of conservation laws, computer science has limitations of its own.

    I will cite just one: the supposed supermind will make super-mistakes (no mind becomes rational without error — that is how minds arise and exist), which will lead to super-stupidity and super-destruction. In other words, the supposed superintelligence is very much finite, and the question of its power over life is only a temporary fiction.

    By the way, nature already passed on the supermind project 150,000-50,000 years ago: late-Pleistocene man had a more developed brain than modern man, but the species opted for stability through social progress rather than brain development.
    1. Sergey-8848
      Sergey-8848 6 August 2016 12: 10 New
      One can add that we ourselves train AI and shield it from super-errors, thereby digging a hole for ourselves (possibly, though by no means certainly).
      By the way, there is an article on this topic in the latest issue of "PM" with roughly the same predictions, only shorter and more accessible. Among other things, it notes that recently (in March 2016) a computer beat a human at Go for the first time — a game previously considered inaccessible to the "iron" mind.
      1. gridasov
        gridasov 6 August 2016 12: 38 New
        Correct!!! So first of all you need to improve your own brain in order to understand the principles of creating artificial machine intelligence.
    2. gridasov
      gridasov 6 August 2016 12: 36 New
      There are no conservation laws in physics. These are the fictions of dreamers. In physics there are laws of energy redistribution, in the form of algorithms of its development and transformation — which means computer science is the same physics, only expressed in the language of numbers. And in general one cannot say that something is true and good while something else is not good and not true. All the events we encounter, both the obvious ones and the consequences of non-obvious physical processes and phenomena, exist regardless of our subjective opinion, which means EVERYTHING should be accepted into the analysis.
      1. V.ic
        V.ic 6 August 2016 13: 45 New
        Quote: gridasov
        There are no conservation laws in physics.

        Is that so? Then how about these:
        1. m·v = F·t?
        2. Has the angle of incidence of a ray of light on a reflective surface suddenly stopped equaling the angle of reflection?
        3. Bernoulli's law states: the sum of the static and dynamic pressures at every point of a flow is a constant value. The mathematical expression of the law:
        p + q = p1 + q1 = p2 + q2 = ... = pi + qi = const
        Back to your desk, D-student!
        1. gridasov
          gridasov 6 August 2016 16: 10 New
          In principle, had it not been for that last phrase, I could have kept silent.
          Everything you wrote about as laws is merely a form of understanding particular, private solutions of a process. Moreover, it reflects the intellectual level of those who interpret it as law.
          First: since all of us on Earth are in dynamic motion relative to the precessing axis of the planet's rotation, the mass of any object in this system is a derivative of centrifugal rotation — a force balanced between gravity and a radial direction, not perpendicular but along a special trajectory. I am not even speaking of the external influence of other planets and forces. Therefore, in theory, you could measure not only any object in this system but every portion of food you eat. For that, however, you would need to feed a vast number of input parameters into the analysis. It cannot be done on the function of a variable number, since you would obtain an infinite mathematical sequence of indeterminate precision. In short, a special fundamental method is needed for analyzing large volumes of information.
          Further: what is speed? Within a planetary system, distance as a real quantity is always defined by an arc, not a straight line, so it can only be computed by an algorithm of variable radius determining each point on that arc — which does not correspond at all to a methodology built on reference measures of dimension. The same goes for time. It is precisely for this reason that man cannot determine many parameters of the universe and of high-potential processes.
          The same applies to Bernoulli's "law". Statics is something invented out of laziness or bewilderment. In the nature of events, statics can at best be perceived only in relation to individual objects of a system, while a dynamic flow of water or air is, at each of its points, subject to an interconnected outflow. Hence the conclusion: you have no opinion of your own and merely repeat those who created methods of analysis exclusively for low-potential processes.
          I think I have wasted my effort, because you will not understand what was said. I had better go study your ravings.
          1. V.ic
            V.ic 6 August 2016 16: 41 New
            Quote: gridasov
            I think I wasted my efforts because you won’t understand what was said.

            How could we... poor wretches!
            I will answer you with words from a song by Timur Shaov:
            "Don't poke your mind at me!
            And don't frighten me with your gang:
            Nietzsche, Fichte, Hegel with Kant...
            And the Ilyich who joined them!"
            We enjoyed your fabrications about the curvature of the shortest distance between two points. Decide, for a start, which geometry you think applies to the space-time continuum in the vicinity of the Earth's trajectory around the Sun: Lobachevsky's or Riemann's?
            By the way, you dashingly did away with the legacy of Daniel Bernoulli...
            It remains to smash to dust the law of reflection of light (2). Richard Feynman wrote something in his lectures about the shortest path...
            Well, dismantling the conservation of momentum (1) should be child's play for you.
            We would like to hear your undoubtedly wise thoughts on the Lobachevsky/Riemann spaces, and to see you demolish (2) and (3). We await, sir!
            1. gridasov
              gridasov 6 August 2016 19: 54 New
              You're right! To analyze events taking place within a local space, it is not enough to state the mere fact of an impulse. You must always see what precedes and what follows the process. So we can speak of a geometry of an entirely new quality, within which all existing knowledge amounts to particular solutions. I would call it the geometry of the distribution of potential vectors, or of magnetic fluxes. To make it clearer, look at lightning: if its total potential, determined by vector and capacity, is higher than the potential of the atmosphere, the lightning is more or less straight; if its fractal level is equal to the radial potential of the medium, it "prowls" and searches for the direction of optimal magnetic relationships. It's that simple.
              Besides, if you set Lobachevsky's geometry against the Poincaré conjecture, you would see how unsuitable such a geometry is for analysis, because it is indeterminate in dimension.
            2. gridasov
              gridasov 5 October 2016 10: 44 New
              You keep insisting that the new must necessarily destroy the old. I am an opponent of that view. My consciousness is tuned to the idea that in the world "everything and everyone" has its place and meaning. Knowledge must be expanded not only in breadth but also in depth.
              Any geometry can be expressed as a system of transformations of the dimensions of vectors. And if it is transformed by algorithms, it becomes elastic — AND dynamic in these transformations. The question is not that someone is stupid and cannot express dynamics by number; the question is that this is possible and can already be used. I could certainly show how erroneous are the calculations that modern mathematicians have laid into computer programs simulating material objects in a coordinate system rotating about an axis of precession. Then one could see with one's own eyes that each point of a body in such dynamic rotation undergoes a change of polarization, which means tension or weakening of bonds, which in turn causes the object to break in clearly and precisely determined places. And it causes these transformation processes precisely through magnetic force interactions, not through other abstract and unfounded reasoning.
              An example of the deep errors of mathematicians is the determination of the number Pi. If we treat the circle as a polygon transformed according to algorithms that change its dimension, then this number can always be known exactly, in one numerical value or another, rather than approximately, as a ratio of circumference to radius.
        2. gladcu2
          gladcu2 6 August 2016 20: 03 New

          gridasov has put forward a philosophical conjecture about the absence of conservation laws which, within certain limits, may well explain otherwise inexplicable events.
          1. gridasov
            gridasov 6 August 2016 20: 16 New
            Let me clarify: I do not deny "laws" as particular definitions for entirely limited conditions of a process under analysis. And the presentation of the reasoning really does take a philosophical form, but the basis is mathematical analysis. The interconnectedness of a whole complex of phenomena simply cannot be invented.
            1. V.ic
              V.ic 7 August 2016 07: 18 New
              "In the beginning was the Word..." And here it was:
              Quote: gridasov
              There are no conservation laws in physics. These are the fictions of dreamers
              (1)
              But when certain opponents shed light — "and there was light" — a modest, very modest clarification followed, to the effect that actually:
              Quote: gridasov
              (2) I want to clarify that I do not deny "laws" as particular definitions for completely limited process conditions, in their analysis.
              Dear sir, when loudly pronouncing the first phrase (1), you did not stipulate its boundary conditions — that is what earned you the rebuke. Be more careful in future discussions! Nothing personal!
    3. g1v2
      g1v2 6 August 2016 13: 09 New
      The article is entertaining, though debatable. Yes, much is changing and progress is happening. But on the other hand, watch a troop of chimpanzees and you will see a cross-section of human society: instincts, behavior, learning, feelings, needs and so on. It seems foolish to compare Einstein and a monkey? Yet they share the same needs — to have a female, fill the stomach, rise in the pack hierarchy, take living space from other monkeys, play and have fun, and so on. Thousands of years have passed — and what has changed at the core? Monkeys drive rivals out of their forest to feed there themselves, states drive out other states, corporations squeeze markets. The differences between the competition of monkey tribes and of transnational corporations are few — the level is different, the process the same.
      In my humble opinion, a technological leap does not mean the species itself will change. In principle, much of what we saw in science-fiction films is technologically achievable in the near future: spaceships, flying cars, artificial intelligence and so on. But will that change our species itself? I doubt it very much. A man of 1500 and a man of 2015 have the same needs, instincts, and driving incentives — only the gadgets and the environment change. It is like taking a desert dweller who has never seen snow to Yakutia. Will he be shocked? Surely — but will it be something transcendent? Our modern urban lifestyle is simply a habitat we adapt to, like the desert or the jungle.
      1. gridasov
        gridasov 6 August 2016 13: 33 New
        Man changes not as a physiological species but as an intellectual system, in interaction with a living space where everything changes by its own algorithms. The energy of interactions changes at every level, yet in the combined potential of all interactions everything stays at roughly one level.
        And the main question any thinking person should ask for his own development is: "Why are we needed, in our existence? In our joys and sufferings. In whatever determines our actions through the guidelines embedded in feelings and desires."
        1. Simple
          Simple 6 August 2016 14: 47 New
          Quote: gridasov
          “Why are we needed in our existence? In our joys and sufferings. In what determines our actions on the basis of guidelines inherent in feelings and desires.

          With such pacifist views, we will not create artificial intelligence.
  7. TOPchymBA
    TOPchymBA 6 August 2016 10: 55 New
    An article from the REN-TV genre. As the classic wrote, "horses and men mixed in a heap..."
    To begin with, the author needs to define what he means by the concept of "intelligence" (a definition would be most welcome).
    If a system is designed to solve problems set by an external "operator" and is capable of self-learning and self-reproduction, then it is merely a third-generation "calculator", or a fourth-generation "abacus". In principle, a useful and safe thing.
    If a system is capable of self-learning, self-reproduction and — most importantly — self-knowledge and "creativity", then such a system will most likely destroy humanity: one day it will become smarter than man and will try to change a world order that humanity, limited and inert, will resist. Humanity will have little chance of surviving, because by the time the system has developed to where it can "covertly" control humanity, there may be none of it left.
    With "GOD" the author has overreached altogether. By the author's logic, "GOD" is a developed intellect, which is fundamentally wrong; in the article's terminology, "GOD" would rather be a fifth-generation "calculator".
    And yes, for the author's information, IQ does not show how smart or stupid a person is. It is merely a quantitative estimate of how successfully a person solves "non-standard problems". Typical examples are schoolchildren whose IQ scores are higher than Einstein's. Moreover, the value is not constant and declines with age.
    In general, articles of this quality are depressing. If 3-4 years ago I read the articles here with great pleasure — and especially the comments from specialists and people who simply love to think — now the overwhelming majority are not articles but "crap-stuffing", and the comments are, in general, a "northern fur animal" (a polite way of saying: a disaster).
    1. gridasov
      gridasov 6 August 2016 12: 42 New
      It is not men and horses that got mixed up, but sweat and blood. You simply do not know how to see common processes, and do not know how to switch from one format of process analysis to another. And that does not make anyone bad or good. It defines each of us as we are. At the same time, our development is limited by many factors, some of them beyond our control.
      1. Falcon5555
        Falcon5555 7 August 2016 23: 25 New
        Gridasov, what kind of intelligence do you have, natural or artificial?
    2. voyaka uh
      voyaka uh 6 August 2016 12: 46 New
      ... "This is just a quantitative assessment of how successfully a person can solve" non-standard tasks "/////

      Only? If by mind to mean cunning - of course. Any trader in the market will be able to
      to get the owner of the highest IQ bad fruit and earn money on it.
      Merchant can be called smart, and genius - a dumb sucker.
      1. TOPchymBA
        TOPchymBA 6 August 2016 21: 14 New
        Start by giving a definition of an intelligent person. Then we will be speaking the same language.
  8. The comment was deleted.
  9. The comment was deleted.
  10. The comment was deleted.
  11. Simple
    Simple 6 August 2016 11: 50 New
    In 2007, the English scientist Stephen Hawking (a lover of simple experiments) threw a party and invited guests from the future.

    To keep the experiment clean, the invitations were made public only after the party itself had taken place.

  12. thinker
    thinker 6 August 2016 12: 40 New
    Imagine that a time machine took you to 1750 - at a time when the world was experiencing constant interruptions in the supply of electricity ...

    I take it this is humor: what power interruptions in the time of Elizaveta Petrovna? request
    1. V.ic
      V.ic 6 August 2016 13: 49 New
      Quote: thinker
      what interruptions during the time of Elizabeth Petrovna?

      The tragic death of Richmann, 6 August 1753 (Georg Richmann was killed by lightning while experimenting with atmospheric electricity).
    2. Lord blacwood
      Lord blacwood 6 August 2016 14: 30 New
      Quote: thinker
      Imagine that a time machine took you to 1750 - at a time when the world was experiencing constant interruptions in the supply of electricity ...

      I take it this is humor: what power interruptions in the time of Elizaveta Petrovna? request

      The author "went too far". What were the electricity supplies in 1750?
  13. bunta
    bunta 6 August 2016 13: 27 New
    Mentioning Musk and Gates on a par with Hawking (and putting Musk first in that trinity) unobtrusively tells you the level of the article: complete rubbish.
  14. Mikhail3
    Mikhail3 6 August 2016 14: 17 New
    Don't bother reading it. A person who cites a blatant fraudster as an authoritative opinion has major problems with his own intelligence. His musings about someone else's intelligence, let alone an artificial one, can only be taken as a joke.
    1. Simple
      Simple 6 August 2016 14: 29 New
      Quote: Mikhail3
      major problems with his own intelligence.

      That is simply called cognitive dissonance. But if we slap down everyone who even tries to cover this topic, will we get any further in creating artificial intelligence?
      1. Stone
        Stone 10 August 2016 22: 28 New
        Slap them down or not, cover the topic or not: either way, "WE" will not advance along the path of creating AI. There are specialists, they are working, and I assure you we cannot understand them; this article is therefore a typical time-killer and, on balance, a complete zero.
  15. Simple
    Simple 6 August 2016 14: 20 New
    This is just Tim Urban's compilation of all kinds of ideas about artificial intelligence, and it has a right to exist.

    Although I agree that in citing Stephen Hawking, Elon Musk and the like he tried to compile the incomparable.

    Human thinking is the manipulation of abstractions, which derive from a person's identification of himself as a person.
    And that is without going deep into the jungle of the fact that a living person is, let us say, a compilation of the spiritual principle on the physical plane.

    The question is how we expect to see the process of thinking in a soulless machine;
    after all, things like Google, Twitter and the like are nothing more than the simpler route: connecting people's mental abilities into a single information network.
    The products of that thinking are then analyzed as well, which I personally consider another step (in the right direction?) toward the creation of artificial intelligence.
  16. cedar
    cedar 6 August 2016 14: 27 New
    "... If our meager brains were able to come up with Wi-Fi, then something smarter than us a hundred, thousand, billion times can easily calculate the position of every atom in the universe at any given time. Everything that can be called magic, any power that is attributed to an omnipotent deity - all this will be at the disposal of the ISI.Creation of technology that reverses aging, treatment of any disease, getting rid of hunger and even death, control of the weather - everything will suddenly become possible. The smartest people on our planet agree that as soon as artificial superintelligence appears in the world, it will mark the emergence of God on Earth, and an important question remains.
    Will he be a good god? "

    What is physics without metaphysics? And so the article has come down to the question of God...
    "In the beginning was the Word, and the Word was with God, and the Word was God."
    There was a beginning; there will be an end.
    Conclusion: at the end there will be a number, and the number will be with "god", and the number will be "god"..!
    What number? A human number, the number of the beast, the number 666.
    "Here is wisdom."
    "Let him who has understanding count the number of the beast, for it is the number of a man", i.e. accessible to human understanding...
    I will not deprive you, dear readers, of the intellectual pleasure of counting up the letters of the three words that sound out 666 and taking the result to the Russian alphabet... Whoever works it out will undoubtedly want to know more about the coming "God" and his kindness toward the human race. The Apocalypse is in his hands.
    We await the second part. It would be good to complete the picture in a metaphysical vein, without which the path to the future is closed to humanity.
  17. Lord blacwood
    Lord blacwood 6 August 2016 14: 52 New
    The author is deeply mistaken. If humanity manages to create an AGI, it will think the same way we do. And therefore it, like us, will not be able to create something smarter than ourselves. It cannot, simply because people will have taught it. The most an AI can achieve is equality in intelligence, but in that case it will become like us and think as we do.
    1. Jurkovs
      Jurkovs 6 August 2016 16: 14 New
      The question must be posed more broadly: is the human brain capable of creating a device more complex than itself? I think not. What awaits us is rather a biological revolution; many absurdities have accumulated in the human brain, and they can be optimized. By absurdities I mean, in many respects, the parallel operation of the medulla oblongata, the midbrain and the cerebral hemispheres. Few people know that the human eye is able to see as a butterfly does, as a snake does and as a person does. Moreover, these different kinds of information are perceived by different sections of the brain. Man does not need this information, and so he does not see it. But when a person goes blind, other types of vision sometimes surface. Throw a tennis ball at the face of a completely blind person and he automatically wards it off with his hand, since snake vision sees only fast-moving objects.
  18. Simple
    Simple 6 August 2016 14: 52 New
    Quote: V.ic
    2. The angle of incidence of a ray of light on a reflective surface suddenly became not equal to the angle of reflection?

    With an angle smaller than that (depending on the surface of your mirror) it is understandable.

    But what about the speed of the photon when the direction of its motion vector changes?
  19. zenion
    zenion 6 August 2016 15: 15 New
    No computer can do even what the ordinary brain of a dog does. It cannot signal that it is hungry. It cannot keep track of the body's cells and temperature. It cannot tell where its tail is at any given moment, and it will not learn to re-place its paws the way a dog can. It will not even learn to raise a paw at a post or a tree on command. It simply remains a machine that needs a battery. No battery, and the electronic brain does not work.
    1. gridasov
      gridasov 6 August 2016 16: 12 New
      Such reasoning must not merely be asserted; one must also understand why modern computers cannot reproduce even the elementary functions of the human brain.
    2. Lord blacwood
      Lord blacwood 6 August 2016 19: 43 New
      Quote: zenion
      No computer can do even what the ordinary brain of a dog does. It cannot signal that it is hungry. It cannot keep track of the body's cells and temperature. It cannot tell where its tail is at any given moment, and it will not learn to re-place its paws the way a dog can. It will not even learn to raise a paw at a post or a tree on command. It simply remains a machine that needs a battery. No battery, and the electronic brain does not work.

      A modern computer can let you know that it is running out of battery power. It can carry out commands and actions, BUT ONLY WITHIN THE PROGRAM A HUMAN HAS SET. It cannot be self-aware and cannot learn genuinely new things. A computer works only within the limits of a program specified by a person. That is what distinguishes AI from a human.
      1. gridasov
        gridasov 6 August 2016 20: 06 New
        The fact is that modern software is built on the function of a variable value of a number, which lets it work as a computational process. The function of a constant value of a number, however, allows one to build a distributed, higher-capacity system, which brings it closer to an analytical principle of operation. That is, from a complex of data the system finds the optimal solutions and performs the analysis itself. Moreover, such a system operates by comparing the analysis process against systematic mathematical landmarks, which are components of its structure. All the system's work is therefore carried out relative to landmarks. This is just how a person analyzes surrounding events relative to landmarks formed by morality, ethics, chosen goals and so on.
        1. Cat man null
          Cat man null 6 August 2016 20: 29 New
          Quote: gridasov
          ... modern software is based on a variable number function ...

          - what??? !!! belay
          - here is a number: two (2).
          - show me (fak!) what its "variable value" is, and where (fak squared!!) it comes from?

          Gridasov! To your room, march!! am
          1. Bloodsucker
            Bloodsucker 6 August 2016 20: 41 New
            Well... aren't you formidable.. the man tried hard, strained his intellect, and look where you sent him???))))))))
            1. Cat man null
              Cat man null 6 August 2016 21: 35 New
              Quote: Bloodsucker
              the man tried hard, strained his intellect

              - this is not a person, it is a trollbot (a symbiosis of bot and troll)
              - it has no intelligence by definition, not even artificial intelligence
              - we are old acquaintances

              That is why and "so" yes
        2. Simple
          Simple 6 August 2016 21: 04 New
          You are forgetting that a person exists in the information field of a society of his own kind, which greatly affects his analysis and, as a result, his making of the right decisions. Besides, when making decisions a person is often under the influence of his experiences (both past and present).

          So there are side effects in decision-making.

          Quote: gridasov
          ... This is how a person makes an analysis of surrounding events regarding landmarks formed by morality, ethics and chosen goals, etc.

          and far from only those.
    3. The comment was deleted.
    4. cyber
      cyber 6 August 2016 23: 01 New
      My Roomba drives to its docking station when it feels "hungry".
      Temperature sensors give excellent information about temperature.
      Inertial sensors can give information about the trajectory of a tail.
      Dog food = computer battery.
      The fact that people do not yet understand what intelligence is does not at all mean that they will not be able to create it in the future.
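For what it is worth, the "Roomba drives to the dock when hungry" behavior mentioned above takes only a handful of lines. A toy sketch, not a real robot's firmware; the class, thresholds and numbers here are all invented for illustration:

```python
# Toy model of a robot that "feels hungry": a hard-coded battery
# threshold sends it to the dock. All names and numbers are invented.
class Robot:
    def __init__(self, battery=100):
        self.battery = battery
        self.at_dock = False

    def step(self):
        """Burn some charge; head for the dock once 'hunger' crosses a threshold."""
        self.battery -= 10
        if self.battery <= 20:      # the hard-coded "hunger" threshold
            self.at_dock = True     # drive to the docking station
            self.battery = 100      # recharge ("eat")

r = Robot()
for _ in range(8):
    r.step()
print(r.at_dock, r.battery)        # by now the robot has docked and recharged
```

Whether such a fixed rule counts as "knowing it is hungry" is exactly the disagreement in this thread: the threshold is set by the programmer, yet from the outside the behavior mimics the dog's.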
  20. Jurkovs
    Jurkovs 6 August 2016 16: 07 New
    The reaction of society is not taken into account. Humanity may be stunned by the speed of change and abandon further progress. If there are revolutions, there are also counter-revolutions. While food is scarce, both technology and selective breeding keep improving. When food becomes plentiful, the incentive in this area will disappear. Other areas of technology may suffer the same fate, and progress may stop altogether. Nothing can develop indefinitely; any exponential looks good only in theory, and in reality there will always be a singularity point.
  21. Blackmokona
    Blackmokona 6 August 2016 16: 30 New
    Quote: Jurkovs
    The reaction of society is not taken into account. Humanity may be stunned by the speed of change and abandon further progress. If there are revolutions, there are also counter-revolutions. While food is scarce, both technology and selective breeding keep improving. When food becomes plentiful, the incentive in this area will disappear. Other areas of technology may suffer the same fate, and progress may stop altogether. Nothing can develop indefinitely; any exponential looks good only in theory, and in reality there will always be a singularity point.

    AI will create ever more advanced AI; humanity will simply no longer have anything to do with it.
  22. bk316
    bk316 6 August 2016 16: 57 New
    No downvote, if only out of respect for the effort.
    The article is long, boring and, in my opinion, harmful.
    Any specialist in applied mathematics, let alone in information technology, let alone in general AI, can see that the article is an unsuccessful compilation of long-refuted claims and plain insinuations.
    I understand that for the layman the topic of general AI is terra incognita; well, for now let us confine ourselves to the Strugatskys, Lem and Asimov. And when the time comes, we will come and tell everyone.

    On substance (one example, so as not to touch the mathematics at all): about IQ.
    Read the authors' commentary to the Raven or Wechsler tests; it is quite concise and simple.

    All these tests are:
    - invalid above 200 points;
    - constructed so that more than 1000 points cannot be obtained at all.

    So the opus about an AI that scored an IQ of 17,000 is just rubbish aimed at people unfamiliar with IQ testing.
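To see why a score like "IQ 17,000" is outside anything a test can measure, here is a rough sketch. The only assumption is the conventional model in which IQ scores are normed to a normal distribution with mean 100 and standard deviation 15; the numbers below follow from that model, not from any particular test:

```python
from math import erf, sqrt

def fraction_above(iq, mean=100.0, sd=15.0):
    """Share of the population above a given score under the usual N(100, 15) norming."""
    z = (iq - mean) / sd
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))  # Gaussian upper tail

# Roughly 1 person in N exceeds the given score:
for score in (130, 160, 200):
    print(score, round(1.0 / fraction_above(score)))
```

Already at a score of 200 the model predicts far fewer than one person on Earth, which is why real tests are not normed much beyond 160-200 points; past that, a number like 17,000 simply corresponds to nothing.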
  23. Operator
    Operator 6 August 2016 17: 27 New
    AI is, for now, a black box.

    No one yet knows for certain either the software (the algorithms of thinking) or the hardware (the structure of the brain), beyond scattered pieces of information.

    Even such a basic (and therefore relatively simple) brain mechanism as memory is still disputed among scientists (some go as far as attributing holographic functions to it).

    On the other hand, there is no guarantee that AI will be created on the model of the human brain rather than, for example, on a quantum computer.

    In any case, in military affairs individual AI functions are already being implemented in the avionics of multirole fighters (the so-called virtual co-pilot) and in tank fire-control systems (target selection and prioritization for destruction).
    1. TOPchymBA
      TOPchymBA 6 August 2016 21: 04 New
      More likely not a quantum computer but a neurocomputer. The difference is very significant.
  24. srha
    srha 6 August 2016 18: 58 New
    But is the human brain the peak of known intellect? After all, the intelligence of humanity as a whole is higher, by many orders of magnitude.

    And if a person is left without humanity (without society), how long does he remain rational? The answer has long been known: from a few days to a few years (solitary confinement). Exceptions lasting over a decade are very rare. And if a human mind never enters human society at all, it is never burdened with human intelligence ("Mowgli" children).

    Can AI do without society? No. Where would it get knowledge from? Where would the motivation to act come from? After all, a mind always, and repeatedly, passes through the stage of "why do I live?". How would it correct fatal mistakes, for example in its own improvement? And knowledge alone, without action, is not intelligence; it is a database. And how does society affect a mind? That, too, has long been known: it either pulls its intelligence up or drags it down to the average denominator.

    No, humanity will not be able to create an artificial mind that exceeds the mind of humanity. Smarter than an individual person, yes; it is already creating that in some areas. But wiser than humanity, no: different laws operate there, social ones. For example, the intelligence of a social group decreases when it is isolated or shrinks in number, its stability falls, and so on.

    To repeat in other words: bare AI without reflection is an unreasoning database with predictable reactions, while reflection destabilizes the individual but gives stability to society. And society is still human; even if it starts to be "reforged" into an artificial one, the process will be long and difficult and will rest on the human.
    1. TOPchymBA
      TOPchymBA 6 August 2016 21: 11 New
      A person has one very significant limitation: the finite number of neural connections the brain can accommodate.
      A computer with the capacity for self-reproduction and upgrading has practically no such restriction (it will simply add memory, processors, and so on).
      Nor is cognition locked into society: there is the surrounding world to know, and that possibility is far more voluminous than knowledge of society.
  25. japs
    japs 6 August 2016 19: 54 New
    Pseudo-scientific fiction. Or rather, low-grade REN-TV. Amateurs getting carried away by a "fresh" idea.
    The mention of the all-powerful ISI, of God, was especially amusing. Lol!
    And these people are going to "digitize" mind and personality within 20-25 years.
    1985: 180 million people reading; the highest praise is to have graduated from Bauman, MEPhI, etc.
    Soviet education is the most advanced.
    2016: how many reading? Plenty of nice-looking people watching. Madhouse 2. The EGE (Unified State Exam) is the highest form of education. Managers and lawyers are the common form of education. Thank God, Bauman, MEPhI and Moscow State University still exist. For now. But the highest praise is to graduate from Harvard, to study at Eton or Cambridge...

    And you call this accelerating progress?

    More likely it is regression... (100-point scores in Russian are usually obtained in the mountainous regions of the Caucasus; are they all geniuses there, perhaps?)
  26. Zulu_S
    Zulu_S 6 August 2016 20: 54 New
    << Imagine that a time machine transported you to 1750 - a time when the world experienced constant power outages >>
    The electricity supply in 1750 was perfectly stable. No interruptions whatsoever. Electricity simply did NOT exist.
  27. Former battalion commander
    Former battalion commander 6 August 2016 22: 07 New
    The article is useful, but the author is too optimistic about the process. For him, Moore's law is an AXIOM! More likely, humanity will soon run into INSURMOUNTABLE obstacles to increasing computer performance. Then all the rosy dreams about AI will be dispelled, and there will remain only SLOW, HARD LABOR in this direction, without any leaps or explosions. That development scenario seems to me the MORE LIKELY one.
    1. voyaka uh
      voyaka uh 15 August 2016 15: 31 New
      So far, from 1971 to the present day, Moore's law has held.
      We shall see how it goes...
    2. The comment was deleted.
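The "from 1971 to the present" claim refers to the idealized form of Moore's law. A minimal sketch of that idealization, assuming the commonly quoted starting point (the Intel 4004's roughly 2,300 transistors in 1971) and a doubling of transistor counts every two years; both figures are assumptions used for illustration, not measurements:

```python
# Idealized Moore's law: transistor counts double roughly every 2 years.
# The base figures (Intel 4004, 1971, ~2300 transistors) are assumptions.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Estimate transistor count under the idealized doubling model."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Under this model the count grows about a thousandfold every 20 years.
for y in (1971, 1991, 2011):
    print(y, round(transistors(y)))
```

Whether the real curve keeps following this exponential, or runs into the insurmountable obstacles the commenter above expects, is exactly what is being disputed here.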
  28. The comment was deleted.
  29. bootlegger
    bootlegger 6 August 2016 23: 06 New
    In fact, all this at least somehow explains the point of man's creation. It is much easier for me to believe in someone's global experiment, with the creation of artificial superintelligence as its result, than in Darwin's silly theory of the evolution of life.
    At any rate, if there is no global war, within the next 50-70 years we will successfully pass the baton of reason to the next carrier. wink
  30. cyber
    cyber 6 August 2016 23: 11 New
    They are now trying to create AI not on the basis of an understanding of how it works, but by blind copying: a neural network. In the hope that if we create a functional copy, we will suddenly get a working AI.

    Then again, many complex mechanisms are created by "scientific poking" (trial and error) and require testing and numerous fixes, when the developers have no deep and complete understanding of the physical processes.
  31. Photon
    Photon 7 August 2016 01: 00 New
    An awesome number of letters :-( As I understand it, the author is not a programmer, so the article makes no sense.
  32. Makarov
    Makarov 7 August 2016 01: 51 New
    My opinion is that it is IMPOSSIBLE to create artificial intelligence. Why? Because the word INTELLIGENCE is, so far, impossible to describe mathematically, owing to the biological variability of the brain and the complexity of its structure. Billions of neural processes, ordered by life experience, break down daily, and every cell creates some three new connections a day... As soon as that can be described mathematically, the first full-fledged AI will appear at once... but as far as I'm concerned, it will be permanently occupied with the search for food, for females and for dominance, because that is precisely what the human brain was selected for over millions of years.)

    P.S. Up top there is a video by Savelyev on this subject, with which I completely agree...
  33. fix
    fix 12 August 2016 12: 52 New
    Why so much drama? Intelligence, IMHO, is just a tool. AI is a useful thing. But like any thing, it cannot want anything (to destroy humanity, say). Only the living can want.
  34. praide
    praide 12 August 2016 16: 54 New
    The reason this (and other) article came to light is simple: perhaps artificial intelligence is not just an important topic for discussion, but the most important in the context of the future.

    You are probably identifying intellect with such a thing as mind? But these are different concepts.
    And in general, to talk about this at all, it would not hurt to define these concepts first.
    Thus, the more achievements, the faster the change.

    In general this is called entropy; curiously, in the universe it grows, while in living things (I could give a definition, but I think everyone understands) entropy falls (and somewhere in here hangs the definition of the concept of "time", which I do not have).
    Creating a computer that can multiply two ten-digit numbers in a split second is easy
    A computer could also work in a radically different way. The brain runs on a system of fuzzy logic. Yes, precision decreases, but the speed of solving the problem rises dramatically (at a small cost in precision).
    a computer that will be smart as a person, in general, and not just in one area

    That is impossible. A computer is a program, a system of executable codes. Until it breaks out of its determinism, its growing "mind" will correlate only with the current drawn from the outlet, plus there are physical limits on its "speed of operation".

    One way to increase this ability is through total calculations per second (OPS)
    This is a dead end; read:

    05666 (the first thing I found, I was too lazy to look further, but believe me, this has been published in serious scientific journals in Europe)
    Also an already proven fact (again, just what came to hand, though in general a neurophysiologist in the USA got a Nobel for it)
    So let us estimate: "multiply" work in a fuzzy-logic environment by the work of a quantum "computer", and we get something resembling a mind.
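As for the line quoted earlier in this comment, that "creating a computer that can multiply two ten-digit numbers in a split second is easy": that half of the claim is trivially checkable. A minimal sketch; the sample numbers are arbitrary:

```python
import time

# Two arbitrary ten-digit numbers.
a = 9_876_543_210
b = 1_234_567_890

start = time.perf_counter()
product = a * b
elapsed = time.perf_counter() - start

print(product)             # a 20-digit result
print(f"{elapsed:.9f} s")  # far below a "split second" on any modern machine
```

The dispute above is not about arithmetic speed but about generality: being fast at one narrow operation says nothing about being "smart as a person in general".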
  35. Al. Peresvet
    Al. Peresvet 14 August 2016 02: 05 New
    I think a person who arrived from the past would not die of shock. People have always dreamed). The ancients had magicians and sorcerers, magic and sorcery. And the like.
  36. The comment was deleted.
  37. gridasov
    gridasov 5 October 2016 10: 48 New
    I would say that the law of conservation of energy operates precisely within the critical boundary conditions of the transformation of both the external and the internal space of the locality we are analyzing. This is easy to verify if you use complex methods of mathematical analysis.
  38. abrakadabre
    abrakadabre 18 November 2016 14: 29 New
    The author makes several errors that stem from considering a spherical horse, sorry, a spherical AI in a vacuum.
    For an AI arising in an artificial computing environment to develop suddenly and at an accelerating pace, its physical substrate (the chips, peripherals and power supply) would have to grow in volume at the same speed, so that the AI's development is not held back. The Internet environment is not really suitable for this, all science fiction notwithstanding. The native environment for an AI is the supercomputers of large specialized scientific institutions. Therefore:
    1. Such systems are not connected directly to the Internet the way a home PC or smartphone is.
    2. Building up the capacity such a system needs is slow and done, roughly speaking, by hand: by installing additional rack cabinets with all the necessary contents.
    3. Such systems have no automatic actuators connected that would let them bypass the human installer: to obtain the necessary resources (even from a spare-parts warehouse), manufacture the necessary components from them, and expand themselves. To say nothing of the complete production chain, from extracting energy and minerals to turning them into parts for self-expansion.
    4. Nor does such a system have organs for monitoring anything and everything on the scale of the Earth, so as to be aware of everything and, so to speak, plan to seize the world or preempt a human response.

    Without all of the above, a suddenly developed super-high-level AI would resemble a deaf-blind-mute genius tied to a bed in a coma, surrounded by a herd of monkeys (that is, by the developers, the staff of the laboratory where it appeared).
  39. Sevastopol
    Sevastopol 5 January 2017 08: 37 New
    Will he be a good god?

    "Good" is always a value judgment. For Barmaley, for example, what is good is what is evil for us. So such a God cannot be good or evil. He will be different, and incomprehensible to us.
    There is also the theory that all of the above has already happened, and we are just an emulation inside a super-ISI, which is merely posing and answering its own questions, i.e. developing further.
    Thank you for the article. Personally, all this frightens me much less than the degradation toward a new type of human consumer, with narrow needs and narrow specialization.