Military Review

Artificial Intelligence. Part two: extinction or immortality?


Here is the second part of an article in the series "Wait, how can this all be real, and why isn't it being talked about on every corner?" In the previous part we saw that an intelligence explosion is gradually creeping up on the people of planet Earth, as AI develops from narrow intelligence toward human-level general intelligence and, finally, artificial superintelligence.

"We may be facing an extremely difficult problem, and it is unknown how much time we have to solve it, but the future of humanity may depend on its solution." - Nick Bostrom.

The first part of the article began innocently enough. We discussed artificial narrow intelligence (ANI, which specializes in solving one specific task, such as plotting routes or playing chess); there is already plenty of it in our world. Then we analyzed why it is so hard to grow artificial general intelligence (AGI, an AI whose intellectual abilities match a human's at any task) out of ANI. We concluded that the exponential pace of technological progress hints that AGI may appear fairly soon. In the end, we decided that as soon as machines reach the human level of intelligence, the following could happen almost immediately:

As usual, we stare at the screen, unable to believe that artificial superintelligence (ASI, an intelligence far smarter than any human) might appear within our lifetimes, trying to pick the emotion that best reflects our opinion on the issue.

Before we delve into the particulars of ASI, let's remind ourselves what it means for a machine to be superintelligent.

The main distinction is between speed superintelligence and quality superintelligence. Often the first thing that comes to mind at the thought of a superintelligent computer is that it can think much faster than a person: millions of times faster, grasping in five minutes what would take a person ten years. ("I know kung fu!")

That sounds impressive, and ASI really would think faster than any human, but the main separating feature will be the quality of its intellect, which is quite another thing. People are much smarter than monkeys not because they think faster, but because human brains contain a number of ingenious cognitive modules that enable complex linguistic representation, long-term planning and abstract thinking, of which monkeys are incapable. Speeding up a monkey's brain a thousandfold would not make it smarter than us: even given ten years, it could not follow a set of instructions to assemble a model kit that would take a person a couple of hours at most. There are things a monkey will never learn, no matter how many hours it spends or how fast its brain works.

Moreover, it is not just that a monkey cannot do what humans do; its brain is incapable of grasping that such worlds even exist. A monkey may know what a human is and what a skyscraper is, but it will never understand that the skyscraper was built by people. In its world, everything belongs to nature; the macaque not only cannot build a skyscraper, it cannot conceive that anyone could build one at all. And that is the result of a small difference in the quality of intelligence.

In the grand scheme of intelligence we are talking about, or simply by the standards of biological creatures, the difference in quality between human and monkey intelligence is tiny. In the previous article we placed biological cognitive abilities on a staircase:

To grasp how serious a superintelligent machine would be, place it two steps above the human on this staircase. Such a machine would be only slightly superintelligent, but its superiority over our cognitive abilities would be as great as ours over a monkey's. And just as a chimpanzee will never comprehend that a skyscraper can be built, we may never understand what a machine a couple of steps higher understands, even if it tries to explain it to us. And that is just a couple of steps. A machine near the top of the staircase would see us the way we see ants: it could spend years teaching us the simplest things from its position, and the attempts would be utterly hopeless.

The kind of superintelligence we will talk about today lies far beyond this staircase. This is the intelligence explosion: the smarter a machine becomes, the faster it can increase its own intelligence, steadily gathering momentum. Such a machine might take years to surpass a chimpanzee in intelligence, but perhaps only a couple of hours to surpass us by a couple of steps. From that point on, the machine could be jumping four steps every second. That is why we should understand that very soon after the first news that a machine has reached human-level intelligence, we may face the reality of coexisting on Earth with something far above us on this staircase (perhaps millions of steps above):

And since we have already established that it is utterly useless to try to grasp the power of a machine a mere two steps above us, let us settle once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who claims otherwise simply does not understand what superintelligence means.

Evolution slowly and gradually developed the biological brain over hundreds of millions of years, and in that sense, if humans create a machine with superintelligence, we will leapfrog evolution. Or perhaps it is part of evolution: perhaps evolution works by having intelligence creep upward until it reaches a turning point that heralds a new future for all living things:

For reasons that we will discuss later, a huge part of the scientific community believes that the question is not whether we get to this turning point, but when.

Where will that leave us?

I think no one in this world, neither I nor you, can say what will happen when we reach the turning point. Oxford philosopher and leading AI theorist Nick Bostrom believes we can boil all possible outcomes down to two broad categories.

First, looking at history, we know the following about life: species appear, exist for a time, and then inevitably fall off the balance beam of existence and go extinct.

"All species eventually die out" has been as reliable a rule in history as "all people someday die." 99.9% of species have fallen off the beam, and it is clear that if a species teeters on the beam too long, a gust of natural wind or a sudden asteroid will tip it off. Bostrom calls extinction an attractor state: a place species fall into and from which no species has ever returned.

And although most scientists recognize that ASI will have the ability to doom humanity to extinction, many also believe that, used well, the capabilities of ASI could carry individuals (and the species as a whole) to the second attractor state: species immortality. Bostrom believes species immortality is just as much an attractor as species extinction: if we reach it, we will be destined to exist forever. Thus, even though every species to date has fallen off the beam into the pit of extinction, Bostrom believes the beam has two sides; it is simply that nothing on Earth has yet been intelligent enough to figure out how to fall off the other side.

If Bostrom and others are right, and judging by all the information available to us they may well be, we have to accept two very shocking facts:

* The appearance of ASI will, for the first time in history, open the possibility for a species to reach immortality and drop out of the fatal cycle of extinction.
* The appearance of ASI will have such an unimaginably enormous impact that it will most likely knock humanity off the beam in one direction or the other.
* It may well be that when evolution reaches this turning point, it always ends the relationship between a species and the stream of life and creates a new world, with or without humans.

This leads to an interesting question that only the lazy fail to ask: when will we reach the turning point, and on which side of the beam will it land us? No one in the world knows the answer to either half of this double question, but plenty of very smart people have spent decades trying to work it out. In the rest of this article we will see what they have come up with.

* * *

Let's begin with the first half of the question: when will we reach the turning point? In other words: how much time is left until the first machine reaches superintelligence?

Opinions vary wildly. Many, including Professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy and futurist Ray Kurzweil, agreed with machine-learning expert Jeremy Howard when he presented the following chart in a TED Talk:

These people share the view that ASI is coming soon: the exponential growth that looks slow to us today will literally explode within the next few decades.

Others, such as Microsoft co-founder Paul Allen, research psychologist Gary Marcus, computer scientist Ernest Davis and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil seriously underestimate the scale of the problem and that we are nowhere near the turning point.

The Kurzweil camp counters that the only underestimation going on is the neglect of exponential growth, and likens the doubters to those who looked at the slowly budding Internet of 1985 and claimed it would have no effect on the world in the near future.

The doubters can parry that progress gets harder with each subsequent step when it comes to the exponential development of intelligence, which cancels out the typically exponential nature of technological progress. And so on.

The third camp, which includes Nick Bostrom, disagrees with both the first and the second, arguing that (a) all this absolutely could happen in the near future, and (b) there is no guarantee it will happen at all, or that it will not take far longer.

Still others, like the philosopher Hubert Dreyfus, believe all three groups are naive to assume there is a turning point at all, and that most likely we will never reach ASI.

What happens when we put all these opinions together?

In 2013, Bostrom conducted a survey of hundreds of AI experts at a series of conferences, asking: "What is your prediction for when we will achieve human-level AGI?" He asked them to name an optimistic year (one with a 10 percent chance of AGI), a realistic one (a year with a 50 percent probability of AGI) and a safe one (the earliest year with a 90 percent probability of AGI). Here are the results:

* Median optimistic year (10%): 2022
* Median realistic year (50%): 2040
* Median pessimistic year (90%): 2075

The median respondent believes that in 25 years we are more likely than not to have AGI. The 90 percent probability of AGI by 2075 means that if you are still fairly young today, it will most likely happen within your lifetime.

A separate study recently conducted by James Barrat (author of the acclaimed and very good book Our Final Invention, excerpts from which I have brought to readers' attention) and Ben Goertzel at the AGI Conference, the annual conference on AGI, simply asked people to name the year by which we will reach AGI: by 2030, by 2050, by 2100, later, or never. Here are the results:

* By 2030: 42% of respondents
* By 2050: 25%
* By 2100: 20%
* After 2100: 10%
* Never: 2%
These results resemble Bostrom's. In Barrat's survey, more than two-thirds of respondents believe AGI will be here by 2050, and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of respondents see no AGI in our future at all.

But AGI is not the turning point; ASI is. So when, according to the experts, will we have ASI?

Bostrom also asked the experts how soon after AGI we will reach ASI: (a) within two years of reaching AGI (that is, almost instantly, through an intelligence explosion); or (b) within 30 years. The results?

The median view put only a 10 percent probability on a fast (two-year) transition from AGI to ASI, but a 75 percent probability on a transition within 30 years or less.

From these data we do not know what date respondents would give a 50 percent chance of ASI, but based on the two answers above, let's assume 20 years. That is, the world's leading AI experts believe the turning point will come around 2060 (AGI appears around 2040, plus roughly 20 years for the transition from AGI to ASI).

Of course, all the statistics above are speculative and merely represent the opinions of experts in the field of artificial intelligence, but they show that most of the people concerned agree that ASI is likely to arrive by about 2060. Just 45 years from now.

Now for the second half of the question: when we reach the turning point, on which side of the fateful beam will we land?

Superintelligence will be powerful, and the critical question for us will be the following:

Who or what will control this force and what will be its motivation?

The answer to this question will determine whether ASI turns out to be an unbelievably great development, an unfathomably terrible one, or something in between.

Of course, the expert community is trying to answer these questions. Bostrom's survey analyzed the probabilities of the possible consequences of AGI's impact on humanity: respondents gave a 52 percent chance that everything turns out very well and a 31 percent chance that it goes badly or extremely badly. The poll attached to the end of the previous part of this series, conducted among you, dear readers of Hi-News, showed roughly the same results. A relatively neutral outcome got a probability of only 17%. In other words, we all believe the appearance of AGI will be a momentous event. Note also that this concerns the appearance of AGI; with ASI, the share of "neutral" answers would be even lower.

Before we dive deeper into the good and bad sides of the question, let's combine the two halves, "when will it happen?" and "will it be good or bad?", into a table covering the views of most experts.

We will talk about the main camp in a minute, but first decide where you stand. Most likely, you are in the same place I was before I began studying this topic. There are several reasons people generally do not think about it at all:

* As mentioned in part one, movies have seriously muddled people and facts by presenting unrealistic AI scenarios, leading us to feel AI is not worth taking seriously at all. James Barrat compared the situation to the Centers for Disease Control issuing a serious warning about vampires in our future.

* Because of so-called cognitive biases, it is very hard for us to believe something is real until we see proof. Picture the computer scientists of 1988 regularly discussing the far-reaching consequences of the Internet and what it might become; people would hardly have believed it would change their lives until it actually did. Computers simply could not do such things in 1988, so people looked at their machines and thought, "Seriously? That is what will change the world?" Their imagination was limited by their personal experience: they knew what a computer was, and it was hard to imagine what computers might become capable of in the future. The same thing is happening now with AI. We hear it will become a big deal, but since we have not yet met it face to face, and on the whole see only rather weak manifestations of AI in our modern world, we find it hard to believe it will radically change our lives. It is against these biases that numerous experts from all camps, along with other interested people, are trying to win our attention through the noise of everyday collective self-absorption.

* Even if we did believe all this: how many times today have you thought about the fact that you will spend most of eternity not existing? Not many, agreed? Even though that fact is far more important than anything you will do today. This is because our brains usually focus on small everyday things, no matter how dramatic the long-term situation we find ourselves in. We are simply built that way.
One goal of this article is to pull you out of the camp called "I like to think about other things" and place you among the experts, even if you end up standing at the crossroads between the two dotted lines on the chart above, completely undecided.

As research progresses, it becomes obvious that most people's opinions quickly drift toward the "main camp," and three-quarters of the experts fall into two subcamps within it.

We will pay a full visit to both of these camps. Let's start with the fun one.

Why might the future be our greatest dream?

As we explore the world of AI, we find surprisingly many people in the comfort zone. The people in the upper-right square are buzzing with excitement. They believe we will land on the good side of the beam, and they are also confident that this is where we are inevitably headed. For them, the future is everything one could dream of.

What distinguishes these people from other thinkers is not that they want to be on the happy side, but that they are sure the happy side is the one awaiting us.

Where this confidence comes from is a matter of dispute. Critics believe it stems from a blinding excitement that overshadows the potential downsides. Supporters reply that gloomy forecasts have always been naive; technology has helped us more than it has harmed us, and it always will.

You are free to choose either opinion, but set skepticism aside and take a good look at the happy side of the balance beam, trying to accept that everything you are about to read may really happen. If you showed hunter-gatherers our world of comfort, technology and endless abundance, it would seem like magical fiction to them; we should be modest enough to admit that an equally incomprehensible transformation may await us in the future.

Nick Bostrom describes three modes in which a superintelligent AI system could operate:

* An oracle, which answers any precisely posed question, including hard questions people cannot answer, such as "how can a car engine be made more efficient?" Google is a primitive oracle.

* A genie, which executes any high-level command (use a molecular assembler to build a new, more efficient version of the car engine) and then awaits the next command.

* A sovereign, which is given broad access and the ability to operate freely in the world, making its own decisions as it goes; it might invent a cheaper, faster and safer way than the car for people to travel privately.

Questions and tasks that seem hard to us will look to a superintelligent system as if someone asked it to fix the situation "my pencil fell off the table," which you would handle by simply picking the pencil up and putting it back.

Eliezer Yudkowsky, an American artificial-intelligence researcher, put it well:

"There are no hard problems, only problems that are hard for a certain level of intelligence. Move up a notch (in terms of intelligence), and some problems will suddenly shift from the category of 'impossible' to the camp of 'obvious.' Move up a big step, and they will all become obvious."

There are many impatient scientists, inventors and entrepreneurs who have chosen the confident-comfort zone on our chart, but for a tour of the best of all possible worlds we need only one guide.

Ray Kurzweil provokes mixed feelings. Some idolize his ideas; some despise him. Some stay in the middle: Douglas Hofstadter, discussing the ideas in Kurzweil's books, aptly remarked that "it is as if you took a lot of very good food and some dog excrement and blended it all up so that you can't possibly figure out what's good or bad."

Whether you like his ideas or not, it is impossible to pass them by without a flicker of interest. He began inventing things as a teenager, and in the years that followed invented several important things, including the first flatbed scanner, the first print-to-speech reading machine, the well-known Kurzweil music synthesizer (the first true electric piano) and the first commercially successful speech recognizer. He is also the author of five bestselling books. Kurzweil is valued for his bold predictions, and his track record is quite good: in the late 1980s, when the Internet was still in its infancy, he predicted that by the 2000s the Web would become a global phenomenon. The Wall Street Journal called Kurzweil a "restless genius," Forbes a "global thinking machine," Inc. Magazine "Edison's rightful heir," and Bill Gates "the best of those who predict the future of artificial intelligence." In 2012, Google co-founder Larry Page invited Kurzweil to become the company's director of engineering. In 2011, he co-founded Singularity University, which is hosted by NASA and partly sponsored by Google.

His biography matters. When Kurzweil describes his vision of the future, it sounds like utter madness, but the really crazy part is that he is anything but mad: he is an incredibly intelligent, educated and sensible person. You may think he is wrong in his forecasts, but he is no fool. Kurzweil's predictions are shared by many experts in the "comfort zone," including Peter Diamandis and Ben Goertzel. Here is what he thinks will happen.


Kurzweil believes computers will reach the level of artificial general intelligence (AGI) by 2029, and that by 2045 we will have not only artificial superintelligence but a completely new world: the time of the so-called singularity. His AI chronology used to be considered outrageously overheated, but over the last 15 years the rapid development of narrow AI (ANI) systems has brought many experts over to Kurzweil's side. His predictions are still more ambitious than those in Bostrom's survey (AGI by 2040, ASI by 2060), but not by much.

According to Kurzweil, the singularity of 2045 will be driven by three simultaneous revolutions: in biotechnology, in nanotechnology and, most importantly, in AI. But before we continue (and nanotechnology comes up continually in discussions of artificial intelligence), let's take a minute on nanotechnology.

A few words about nanotechnology

Nanotechnology is what we call technology that manipulates matter at scales of 1 to 100 nanometers. A nanometer is a billionth of a meter, or a millionth of a millimeter; the 1-100 nanometer range covers viruses (100 nm across), DNA (10 nm wide), hemoglobin molecules (5 nm), glucose (1 nm) and more. If nanotechnology ever becomes ours to command, the next step will be manipulating individual atoms, which are only about one order of magnitude smaller (~0.1 nm).

To understand why humans run into problems trying to control matter at such a scale, let's jump to a larger scale. The International Space Station orbits 481 kilometers above the Earth. If humans were giants so huge their heads touched the ISS, they would be about 250,000 times bigger than they are now. If you magnify something in the 1-100 nanometer range 250,000 times, you get up to 2.5 centimeters. So nanotechnology is the equivalent of a giant as tall as the ISS orbit trying to manipulate objects the size of a grain of sand or an eyeball. To reach the next level, control of individual atoms, the giant would have to carefully position objects 1/40 of a millimeter across; ordinary-sized people would need a microscope to see them.
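The scaling analogy can be checked with quick arithmetic (a rough sketch using the article's own figures of 481 km and a 250,000x magnification, and assuming a person roughly 1.9 m tall):

```python
# Scale factor: a ~1.9 m person enlarged until their head touches
# the ISS orbit (~481 km, per the article) is roughly 250,000x bigger.
scale = 481_000 / 1.9          # ~253,000; the article rounds to 250,000
nm = 1e-9                      # one nanometer in meters

# The 1-100 nm nanotech range, magnified 250,000x:
upper = 100 * nm * 250_000     # 0.025 m = 2.5 cm (a grain of sand or an eyeball)
atom = 0.1 * nm * 250_000      # 2.5e-5 m = 0.025 mm = 1/40 of a millimeter

print(round(scale), upper, atom)
```

So the giant's "grain of sand" corresponds to the top of the nanotech range, and a single atom to a speck 1/40 of a millimeter across.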

Richard Feynman first spoke about nanotechnology in 1959, saying: "The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. In principle, a physicist could synthesize any chemical substance that the chemist writes down. How? Put the atoms down where the chemist says, and so you make the substance." It is that simple: if you know how to move individual molecules or atoms, you can do almost anything.

Nanotechnology became a serious scientific field in 1986, when engineer Eric Drexler laid out its foundations in his seminal book Engines of Creation; Drexler himself, however, believes that those who want to learn about modern ideas in nanotechnology should read his 2013 book Radical Abundance.

A few words about the "gray goo"
Let's dig a little deeper into nanotechnology. The "gray goo" topic in particular is one of the less pleasant subjects in the field, and it cannot go unmentioned. Older versions of nanotechnology theory proposed a method of nanoscale assembly involving the creation of trillions of tiny nanobots that would work together to build something. One way to create trillions of nanobots is to build one that can replicate itself: one becomes two, two become four, and so on. Within a day, several trillion nanobots would appear. Such is the power of exponential growth. Funny, isn't it?

It's funny right up until it leads to an apocalypse. The problem is that the same power of exponential growth that makes it so convenient to quickly create a trillion nanobots makes self-replication a terrifying prospect. What if the system glitches and, instead of stopping replication at a couple of trillion, the nanobots just keep multiplying? What if the whole process runs on carbon? Earth's biomass contains about 10^45 carbon atoms. A nanobot would consist of on the order of 10^6 carbon atoms, so 10^39 nanobots would devour all life on Earth, and that takes just 130 replications. An ocean of nanobots ("gray goo") would flood the planet. Scientists estimate a nanobot could replicate in about 100 seconds, which means a simple mistake could kill all life on Earth in roughly 3.5 hours.
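The arithmetic behind those numbers is easy to reproduce (a quick sketch using the article's own assumptions: 10^45 carbon atoms of biomass, 10^6 atoms per nanobot, one replication per 100 seconds):

```python
import math

carbon_atoms_in_biomass = 10**45
atoms_per_nanobot = 10**6
nanobots_needed = carbon_atoms_in_biomass // atoms_per_nanobot  # 10^39

# Doubling each generation: smallest n with 2^n >= 10^39.
generations = math.ceil(math.log2(nanobots_needed))  # 130 replications

# At ~100 seconds per replication: about 3.6 hours
# (which the article rounds down to 3.5).
hours = generations * 100 / 3600

print(generations, round(hours, 1))
```

This is the essence of exponential growth: 130 doublings, each a trivial step, are enough to go from one nanobot to more than the planet's entire carbon budget.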

It could be even worse if nanotechnology falls into the hands of terrorists or hostile specialists. They could create several trillion nanobots and program them to quietly spread across the world over a couple of weeks. Then, at the push of a button, in a mere 90 minutes the nanobots would eat everything, with no chance of stopping them.

Although this horror story has been widely discussed for years, the good news is that it is just that: a horror story. Eric Drexler, who coined the term "gray goo," recently said: "People love scare stories, and this one belongs with the stories about zombies. The idea itself eats brains."

Once we master nanotechnology, we can use it to create technical devices, clothing, food, biological products (blood cells, virus and cancer fighters, muscle tissue and so on), anything at all. And in a world that uses nanotechnology, the cost of a material will no longer be tied to its scarcity or the complexity of its manufacturing process, but to the complexity of its atomic structure. In a nanotech world, a diamond could be cheaper than a pencil eraser.

We are not there yet. And it is not entirely clear whether we are underestimating or overestimating the difficulty of the path. But everything points to nanotechnology not being far off. Kurzweil suggests we will have it by the 2020s. The world's governments know nanotechnology may promise a grand future, and they are investing many billions in it.

Just imagine what possibilities a superintelligent computer would gain with a reliable nanoscale assembler at its disposal. And nanotechnology is our idea; we are still struggling to ride it. What if, to an ASI system, nanotechnology is a joke, and ASI comes up with technologies many times more powerful than anything we can even conceive? Remember, we agreed: no one can predict what artificial superintelligence will be capable of. Our brains are thought to be unable to foresee even the least of what is coming.

What could AI do for us?

Armed with superintelligence and all the technologies superintelligence could create, ASI would probably be able to solve all of humanity's problems. Global warming? ASI would first halt carbon emissions by inventing a host of efficient, fossil-free ways to produce energy. Then it would devise an effective, innovative way to remove excess CO2 from the atmosphere. Cancer and other diseases? Not a problem: healthcare and medicine would change in ways we cannot imagine. World hunger? ASI could use nanotechnology to build meat identical to natural meat, from scratch: real meat.

Nanotechnology could turn a pile of garbage into a vat of fresh meat or other food (not necessarily in a familiar form; imagine a giant cube of apple) and distribute all this food around the world via advanced transportation systems. It would be great for animals, of course, which would no longer have to die for our food. ASI could do many other things, too, like saving endangered species or even resurrecting extinct ones from preserved DNA. ASI could resolve our thorniest macroeconomic problems; our hardest debates in economics, ethics, philosophy and world trade would all be painfully obvious to ASI.

But there is one thing in particular ASI could do for us, something so alluring and tantalizing that it would change everything: ASI could help us deal with mortality. As you gradually grasp the possibilities of AI, you may find yourself reconsidering all your ideas about death.

Evolution had no reason to extend our lifespans beyond what they are now. If we live long enough to bear and raise children to the point where they can fend for themselves, that is enough for evolution. From an evolutionary point of view, thirty-odd years is enough time to reproduce, so there is no selection pressure favoring mutations that prolong life. William Butler Yeats called our species a soul "fastened to a dying animal." Not much fun.

And since we all die someday, we live with the idea that death is inevitable. We think of aging the way we think of time: both keep moving forward, and there is no way to stop them. But that idea is treacherous: captured by it, we forget to live. Richard Feynman wrote:

"There is a wonderful thing about biology: there is nothing in this science that points to the necessity of death. If we want to build a perpetual-motion machine, we realize we have already discovered enough laws of physics to see that it is either absolutely impossible or the laws are wrong. But nothing found in biology yet indicates the inevitability of death. This leads me to believe it is not inevitable at all, and that it is only a matter of time before biologists discover the cause of this trouble, this terrible universal disease, and it is cured."

The fact is, aging has nothing to do with time as such. Aging is the physical materials of the body wearing out. Car parts degrade too, but is their degradation inevitable? If you repair a car as its parts wear out, it will run forever. The human body is no different, just more complex.

Kurzweil talks about intelligent, Wi-Fi-connected nanobots in the bloodstream that could perform countless health tasks, including the routine repair or replacement of worn-out cells anywhere in the body. Perfect this process (or find a better alternative devised by a smarter ASI), and it would not just keep the body healthy; it could reverse aging. The difference between a 60-year-old's body and a 30-year-old's is a handful of physical details that could be corrected with the right technology. ASI could build a machine a person would walk into at 60 and walk out of at 30.

Even a degrading brain could be refurbished. An ISI would surely know how to do this without disturbing the brain's data (personality, memories, and so on). A 90-year-old suffering from complete brain degradation could be restored, upgraded, and sent back to the start of a new career. This may sound absurd, but the body is just a collection of atoms, and an ISI could probably manipulate atoms into any structure it pleased. Not so absurd after all.

Kurzweil also believes that artificial materials will be integrated into the body more and more as time goes on. To begin with, organs could be replaced by super-advanced machine versions that work forever and never fail. Then we could undertake a complete redesign of the body, replacing red blood cells with ideal nanobots that propel themselves, eliminating the need for a heart altogether. We could also improve our cognitive abilities, start thinking billions of times faster, and gain access to all of humanity's information through the cloud.

The possibilities for new horizons would be truly limitless. People have already given sex a new purpose: they engage in it for pleasure, not just reproduction. Kurzweil thinks we can do the same with food. Nanobots could deliver perfect nutrition directly to the body's cells while letting unhealthy substances pass straight through. Nanotechnology theorist Robert Freitas has already designed a replacement for blood cells that, implanted in a human body, could let a person go without breathing for 15 minutes — and that was invented by a human. Imagine what will be possible when this power belongs to an ISI.

Ultimately, Kurzweil believes that people will reach the point of becoming entirely artificial; a time when we will look at biological materials and marvel at how primitive they were; a time when we will read about the early stages of human history and be astonished that microbes, accidents, disease, or plain old age could kill a person against his will. In the end, people will conquer their own biology and become eternal — that is the path to the happy side of the balance beam we have been talking about from the very beginning. And the people who believe in it are also sure that this future awaits us very, very soon.

It will certainly not surprise you that Kurzweil's ideas have drawn severe criticism. His singularity in 2045 and the eternal life to follow have been mocked as "the Rapture of the Nerds" or "intelligent design for people with an IQ of 140." Others have questioned his optimistic timeframe, his understanding of the human body and brain, and his application of the patterns of Moore's law. For every expert who believes in Kurzweil's ideas, there are three who believe he is mistaken.

But the most interesting thing is that most of the experts who disagree with him do not actually say it is impossible. Instead of saying "nonsense, this will never happen," they say something like "all of this will happen if we safely get to ISI — and that is precisely the catch." Bostrom, one of the recognized AI thinkers warning about the dangers of AI, also acknowledges:

“It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, suffering of all kinds — all of this a superintelligence equipped with nanotechnology could eliminate in a moment. A superintelligence could also give us an unlimited lifespan by stopping and reversing the aging process through nanomedicine, or by offering us the option to upload ourselves. It could likewise create opportunities for an endless increase in our intellectual and emotional capacities; it could help us build a world in which we live in joy and understanding, approaching our ideals and regularly fulfilling our dreams.”

That is a quote from one of Kurzweil's critics — one who nonetheless concedes that all of this is possible if we manage to create a safe ISI. Kurzweil has simply defined what artificial superintelligence ought to become, if it ever becomes possible at all. And if it turns out to be a good god.

The most obvious criticism of the comfort-zone camp is that they may be dead wrong in their assessment of the ISI's future. In his roughly 700-page book on the singularity, Kurzweil devotes only about 20 pages to potential ISI threats. The question is not when we get to ISI; the question is what its motivation will be. Kurzweil answers cautiously: “ISI will arise from many disparate efforts and will be deeply integrated into the infrastructure of our civilization. Indeed, it will be intimately embedded in our bodies and brains. It will reflect our values, because it will be us.”

But if that is the answer, why are so many of the world's smartest people so worried about the future of artificial intelligence? Why does Stephen Hawking say that developing ISI "could spell the end of the human race"? Bill Gates says he "doesn't understand people who are not concerned" about it. Elon Musk fears that we are "summoning the demon." Why do so many experts consider ISI the biggest threat facing humanity?

This we will talk about next time.

Based on a compilation by Tim Urban. The article draws on materials from Nick Bostrom, James Barrat, Ray Kurzweil, Nils Nilsson, Steven Pinker, Vernor Vinge, Moshe Vardi, Russ Roberts, Stuart Armstrong and Kaj Sotala, Susan Schneider, Stuart Russell and Peter Norvig, Theodore Modis, Gary Marcus, Carl Shulman, John Searle, Jaron Lanier, Bill Joy, Kevin Kelly, Paul Allen, Stephen Hawking, Kurt Andersen, Mitch Kapor, Ben Goertzel, Arthur C. Clarke, Hubert Dreyfus, Ted Greenwald, Jeremy Howard.


  1. demiurg
    demiurg 13 August 2016 05: 37
    I suggest the author devote the next installment to the God-Emperor from Warhammer 40,000.
    As it stands, neither examples of modern experiments nor the current state of development are given. Everyone loves to dream.
    Believe me, it is a very interesting topic, especially if you draw analogies between Russia and the Imperium, or the Dark Eldar and the West.
    And you can find far more interesting artwork on the Web.
    1. Alyer
      Alyer 14 August 2016 23: 28
      The article is philosophical, so it is wrong to criticize it at the level of your own perception. Incidentally, many real philosophical laws are ruled out by mathematical formulas and physical definitions. Nevertheless they, the philosophical laws, exist and work. The thing is, mathematics allows simplifications, and physics allows exceptions.
  2. Armored optimist
    Armored optimist 13 August 2016 05: 59
    Artificial intelligence will give immortality to the rich, who will be served by machines and a limited number of living servants. But how boring their life will be!
    1. kalibr
      kalibr 13 August 2016 07: 00
      Once the "rich" get immortality, they will no longer need wealth — especially if a "nano-assembler" is created. The problem is how much the initial "nanoization" of a person will cost. If it is very expensive, then yes, only the richest will become supermen, and everyone else will hate them. If it is not very expensive, there will be a problem of selection: to whom do we grant immortality, and to whom not? And what counts as merit, if money alone is not enough? Finally, if the price equals that of a flu shot, the selection problem arises again (what to do with fools, if, say, nanotechnology cannot fix them? And if it does fix them and they become smart and beautiful, there still remains the problem of those who fundamentally do not want to become cyborgs). They will still need factories and fields, they will still pollute nature, and they will hate the supermen. In theory they should be destroyed. But that is not humane. What to do?
      1. Proxima
        Proxima 13 August 2016 11: 36
        The philosophical side of "immortality" was analyzed in detail by Homer. It is enough to recall how the gods regretted that they were immortal. They could not treasure the past, the present, or the future, because for them everything was eternal. That is why they were drawn to humans, who treasured the moment, treasured life. Remember Odysseus, who was offered eternal youth and an eternally young, beautiful goddess — yet he chose family, mortal life, and an aging wife. In fairness, the choice was agonizingly hard for him: it took him seven years to make it!!! So, dear forum users, the eternally living, like the immortal gods of Olympus, would envy us mortals.
  3. surrozh
    surrozh 13 August 2016 06: 14
    I agree — there are no specifics. On VO the specifics are of a different kind; philosophy is not much welcomed.
  4. Razvedka_Boem
    Razvedka_Boem 13 August 2016 06: 44
    And if it turns out to be a good god.

    When AI appears, it will become a god. You don’t think you can control it? ..)
    Eric Drexler laid out their fundamentals in his seminal book Engines of Creation

    I read this book, the perspectives described there are amazing.
    1. kalibr
      kalibr 13 August 2016 07: 05
      I gave my students an assignment: invent a new religion for the near future. They studied it all and decided it would be... "machinism." A Great Machine that answers any question! The "machinists" are its priests, the "oilers" its minor clergy, and so on. The emblem is a gear with a cross inscribed in it, and four crescents at the base of the cross. It's funny!
      1. Razvedka_Boem
        Razvedka_Boem 13 August 2016 09: 01
        And since we have already established that it is absolutely useless to try to understand the power of a machine that is only two steps higher than us, let us define once and for all that there is no way to understand what ICI will do and what the consequences will be for us. Anyone who claims the opposite simply does not understand what super-intelligence means.

        I think your students drew a fairly plausible picture, but an AI will be self-sufficient: it will not need people to worship it; more likely it will use them for its own purposes. As it develops, I think it will head for the ocean and turn into something like Solaris ..)
        I was also struck by a story of S. Lem's in which information has a critical mass — and when it reaches it, it self-destructs.
    2. de_monSher
      de_monSher 8 November 2016 19: 13
      When AI appears, it will become a god.

      Setting the religious view aside, the concept of God (meaning the one God) is a strictly formalized concept with a defined set of attributes. An AI system will not be able to satisfy even half of those attributes — so by definition it cannot be God... *)
  5. cumastra1
    cumastra1 13 August 2016 07: 26
    Immortality. Transferring consciousness into a carrier — a machine of whatever kind, autonomous or something like a server holding billions of consciousnesses — is a kind of death. In the worst case, slavery: the fate of a toy, a sort of ant farm. Renewing the body — creating elves? So what will the Earth look like in a thousand years? A branch office of a herring barrel, and not necessarily a peaceful herring. More likely, the elves will be playing some "Farm" or "Total War." The "immortals." So it comes to the same thing: "We're all going to die."
  6. xorgi
    xorgi 13 August 2016 09: 14
    An ode to nonsense! What is superintelligence? Intelligence is something close to human thinking. There are NO such systems; there are only imitations. So what about superintelligence? Why is it assumed that faster means super? A faster enumeration of options, even by a unique algorithm, is not intelligence — yet ALL modern quasi-intelligent systems work exactly that way: algorithm plus speed.
    1. The comment was deleted.
    2. bastard
      bastard 13 August 2016 12: 31
      Quote: xorgi
      An ode to nonsense! What is superintelligence? Intelligence is something close to human thinking. There are NO such systems; there are only imitations.

      Nice to read your comment. +
      Quote: xorgi
      Why is it assumed that faster means super? A faster enumeration of options, even by a unique algorithm, is not intelligence,

      Exactly — it's combinatorics. Real AI is as far away as Pluto at a pedestrian's pace.
      I'll post this video again; judging by the likes and ratings, apparently nobody bothered to watch it . . .
      The speaker is Doctor of Biological Sciences, Prof. S.V. Savelyev, an embryologist and neuromorphologist. It's a pity that priests and charlatans appear on TV more often than scientists.
      1. xorgi
        xorgi 13 August 2016 13: 52
        Chess is not an intellectual game! Look at Kasparov! - Yes, for this phrase alone you can give a medal!
    3. gladcu2
      gladcu2 13 August 2016 18: 05

      You did not read the article carefully. What you wrote about is analyzed there in detail.
  7. Vadim237
    Vadim237 13 August 2016 09: 21
    We already have progress toward an ISI.
    Computer immunity created

    In Tomsk, they developed artificial intelligence that can independently detect malicious software without the help of anti-virus systems.

    Adaptive immunity of the operating system (AIOS) is also able to determine the authorship of the virus by analyzing the program code. Scientists intend to prove that antivirus companies themselves write viruses that threaten our computers.

    According to experts, most users spend from four to six thousand rubles annually to protect their computers. Moreover, new viruses that threaten the "health" of the PC appear almost every month. They are constantly evolving, can self-learn and change their behavior.

    “Our survey among users showed that 65% of them do not trust existing antiviruses because of the lack of transparency in their work,” Evgeny Garin, head of the TUSUR intellectual property department, told RG. - For example, antiviruses have access to absolutely all sections on drives and RAM, that is, including our personal data.

    Popular antiviruses lack true malware-detection capabilities: they can only find a virus that is already known and listed in their signature library. That is why they need regular updates — and meanwhile, malware not yet in those databases goes undetected.
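The limitation described above — signature scanning can only catch what is already in its library — can be sketched in a few lines. (The signatures and samples below are invented purely for illustration; real engines add heuristics, emulation, and much more.)

```python
# Toy signature scanner: flags a sample only if it contains a byte
# pattern already present in the signature library.
# (Signatures and samples here are made up for illustration.)

KNOWN_SIGNATURES = [
    b"\xde\xad\xbe\xef",   # hypothetical signature of an old, known virus
    b"EVIL_PAYLOAD_V1",    # another hypothetical known pattern
]

def signature_scan(sample: bytes) -> bool:
    """Return True if any known signature occurs in the sample."""
    return any(sig in sample for sig in KNOWN_SIGNATURES)

# A known virus is detected:
print(signature_scan(b"...EVIL_PAYLOAD_V1..."))   # True
# A brand-new variant sails through until the library is updated:
print(signature_scan(b"...EVIL_PAYLOAD_V2..."))   # False
```

A heuristic approach like the Tomsk AIOS described here would instead look for behavioral traits of the code itself (self-copying, self-modification) rather than exact byte patterns.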

    The main principle of AIOS is to identify elements of artificial intelligence in programs.

    “The algorithms developed in TUSUR allow us to detect in the program code the ability of the virus body to copy itself and other signs of a self-regulating system similar to a living organism,” the press service of innovative organizations of the Tomsk Region reports.

    “Our program can be compared to a real immune system,” explains Evgeny Garin. - Standard antivirus "goes for pills" to the store of its manufacturer. If there is no cure for this particular infection, the computer will die, because it does not have its own "immunity". AIOS checks program code for signs of malware, acting as an immune defense.

    According to the developers, in the future artificial immunity will recognize 100% of viruses. Using the system, the scientists plan to compile a library of the individual semantic fingerprints of programmers who write malicious code. The virus hunter will not only study such programs' behavior and isolate them, but also try to identify their author and report his activities to law enforcement.

    “Our main task is to stop antivirus companies themselves from writing viruses in order to prop up demand for their software,” says Evgeny Garin. “It is quite possible that there is a certain collusion between antivirus makers and operating-system makers. That is why we plan to integrate our adaptive immunity into domestic operating systems as part of the import-substitution program.”

    Scientists intend to team up with manufacturers of domestic operating systems and release computers already with built-in immunity. This will protect end consumers not only from viruses but also from unscrupulous manufacturers of antivirus software. At the same time, users will not need to spend money annually on updating the system.

    The developers believe that the introduction of adaptive immunity will allow our operating systems to compete with such market leaders as Windows, Linux, Android and iOS.

    The AIOS project has already attracted several potential investors. In the near future, representatives of one of the companies will come to Siberia for a more detailed acquaintance with the project.
    1. region58
      region58 13 August 2016 11: 45
      Quote: Vadim237
      Computer immunity created

      Sorry ... Associations ...
      This has happened before... laughing At one time a well-known student named Babushkin wrote an antivirus called "Immunity." Real antiviruses, mind you, flagged it as a virus. The most piquant part is that the author received a sizable grant, and his dad (not the least person at a medical university) "deployed" this antivirus on work machines and, accordingly, paid for its technical support out of the budget. A detailed analysis appeared on Habrahabr. The guys laughed for a long time over the "E* and geese" technology...
    2. gladcu2
      gladcu2 13 August 2016 18: 13
      In other words, they simply changed the concept of antivirus software: where the old kind scans the computer for library "markers," the Tomsk software is supposed to scan for malicious code — probably by comparing it against library samples all the same.

      They traded an awl for soap and called it AI.
  8. kunstkammer
    kunstkammer 13 August 2016 10: 04
    Given that man's moral nature does not change either over his lifetime or even with the "progress" of social relations, with the advent of AI we can expect... a drop in the number of commentators on this site.
    Instead of giving an opponent a "-", you could simply command the AI to decompose the scoundrel into atoms, or to make a cube of apple out of him.
    Simple and convincing.
    Whatever a man sets out to build, he still ends up with a Kalashnikov assault rifle. A very big one, true. :-)
    So much for turning the author's philosophical topic into one specific to our site.
    1. gladcu2
      gladcu2 13 August 2016 18: 18

      You're mistaken, buddy.

      Man is governed by changing his morality. There would be no wars or discord in the world if everyone held common moral values.

      You could build a super-country if you instilled common moral values in its people.
  9. pimen
    pimen 13 August 2016 10: 09
    If we are talking about some kind of qualitative leap, I would associate it not with AI, or even with technology, but with a rise in the efficiency of civilization, in its rationality — because at the moment only a decline is observed. The main reason is not even the exponential growth of an inefficient technogenic environment, but the very principle of cognition, whereby we first plunge into a problem and only then begin searching for any solution at all.
    1. Ajevgenij
      Ajevgenij 13 August 2016 17: 44
      I agree with this thought. I think the same way.
  10. chunga-changa
    chunga-changa 13 August 2016 10: 35
    Let us set aside the question of whether an ISI can be created at all and whether it will be created — suppose yes.
    The first question is the interaction between ISI and man. For clarity, imagine the ISI as a person and man as a dog.
    Let us also set aside the case where the dog simply subjugates its owners and makes them act more or less in its own interests despite the "levels of superiority" — that takes will and character, which do not concern us now.
    You can get a dog for one purpose or another, train it and teach it certain behavior; then you leave for work, and when you come home you find your favorite sofa torn to shreds. Why? She simply felt like it, in spite of all the training and upbringing. Does this behavior inconvenience you — yes; will you fight it — yes. Apparently an ISI, facing something similar, would begin selecting and breeding human breeds with the required characteristics and parameters. Would such a person be the same as now? Yes, but the difference would be roughly that between a wolf and a poodle. Incidentally, looking at the diversity of races, ethnic groups and their characteristic traits, one could say that an ISI has existed for a long time and is breeding humanity in some interests of its own.
    The second question is competition. As we know, intraspecific struggle is always fiercer than interspecific. If we assume that several independent centers of ISI development will be created simultaneously — and that is most likely the case — then it is logical to expect competition and intraspecific struggle between them. It is hard to imagine what form it will take and whether its goals will be comprehensible to people. Will people be used in this struggle? Most likely yes — in the usual form, I think: "extermination of enemies." So the future will be interesting, but most likely not so cloudless.
    1. bootlegger
      bootlegger 13 August 2016 18: 19
      With equal probability one can assume that interaction with people will interest an ISI for only a very short time. To use your dog analogy: a dog's usefulness to man eventually disappears. In the early stages of development the dog was actively used, and with technological progress the need for it gradually vanished.
      If the ISI really does improve exponentially, it will very soon stop needing us. We will quickly move from the category of dog to the category of insect. And then what awaits us is the equivalent of dog shelters, or simply liquidation. Why this force would drag us along into some bright future is completely unclear — just for the sake of a brief experiment?
      Or maybe all of this, from the very beginning of evolution, has been an experiment in creating yet another ISI?
      1. chunga-changa
        chunga-changa 13 August 2016 18: 33
        Yes, broadly correct. I think most likely we will simply be abandoned on the planet while the ISI rushes off "into unknown distances" — it would have no need to sit on Earth.
        But that is true only in a spherical vacuum. I think some "three laws" will be built into any such system from the start, strongly restricting or outright prohibiting certain actions and, apparently, even certain lines of thought. Otherwise there is no point: the commercial return on the project would be zero or negative, and nobody will put up big money for that.
        1. bootlegger
          bootlegger 13 August 2016 19: 54
          If it just abandons us, that is not so bad. Let us hope the ISI's need for our planet's resources does not become critical for us.
          Then there is some hope that we will inherit the remnants of near-singularity technologies, and in theory we could upgrade human intelligence and pull individual representatives of ours up to the superhuman level. Of course, they too would all depart into the unknown distance, but it is at least some kind of hope))
    2. gladcu2
      gladcu2 13 August 2016 18: 23

      It seems to me that if you wait, the next installment will take up the very question you raised.

      That is, the existence and form of God. A very weighty question — all the more so since absolutely everyone in the world has encountered manifestations of fate.
      1. chunga-changa
        chunga-changa 13 August 2016 18: 36
        You know, I don't really believe in any of that — fate, God, and so on.
        Creating AI is a field of applied science, mathematics, philosophy and ethics; questions of faith are somehow out of place here.
        1. gladcu2
          gladcu2 13 August 2016 19: 09

          The manifestation of fate is an objective thing and does not depend on your awareness of it. For example, analyzing my own life, I realize ever more surely that what I have arrived at was laid down in me from early childhood, as far back as I can remember. In effect I see myself as a biorobot carrying out someone else's will in an unvarying sequence.

          That is what is called fate. Hence the question: who needs all this? Maybe the intelligence predicted for 2025 already exists. And it would hardly allow competition for itself to be created — though it might also want an equal interlocutor. :)

          Oh... no, not faith... not even close. Religious questions are not part of the AI field. This is something else entirely.
  11. Lord blacwood
    Lord blacwood 13 August 2016 11: 07
    The author does not understand the essence of what is happening. An ISI will never appear on Earth, because man cannot make something smarter than himself.
    Yes, it will be possible to build a robot that looks like a person, copies his actions, answers questions, holds a conversation — but it will do all this according to a program that people wrote. AI cannot fantasize or feel the way people do, because it is a set of programs that people have given it.
    For example, a humanoid robot will be able to smile, but it will smile not because it feels joy, but because that action is written into the program. That is why an AI equal to us is unlikely ever to arise, and an ISI never will, since that is limited by the capabilities of our own brain.
    1. gridasov
      gridasov 13 August 2016 12: 36
      What artificial intelligence can we even talk about, when no one understands the mechanism by which the human brain perceives, processes and applies information? And that mechanism, like its operation, must be described in a language that admits no fantasies or figurative expressions — the language of numbers. Yet mankind does not even command all the properties of number, in particular the function of its constant value. So first there will appear people far more capable of analyzing the events around them, and only then will it be possible to see the foundations of an analysis that can be reproduced by machine.
  12. atos_kin
    atos_kin 13 August 2016 11: 26
    Only the Unified State Exam will save the world from the emergence of an ISI. laughing But if an ISI does arise, the first thing it will do is abolish (destroy) private ownership of the means of production — and it will take humanity as an ally, not as a "partner."
  13. vladimirvn
    vladimirvn 13 August 2016 12: 45
    The first thing robots will do on coming to power is give man everything he wants: plenty of food, an idle life, and so on. And over time man will degrade and turn into an animal. Already now we need to understand clearly where robots may be admitted and where not. In an interspecific struggle with robots, humanity is doomed to defeat.
  14. Falcon5555
    Falcon5555 13 August 2016 13: 28
    This whole series of articles looks like something off REN TV.
    Man differs from the ape and the other animals in having abstract thinking, plus a language for communicating, recording and learning; the words of a language denote abstractions, and we usually think by means of language, or at least by means of abstractions that have names in the language. We also have a biological motivation to think harder than the neighbor or the beast in the forest: we "want" to live pleasantly, "in chocolate," rather than starve, freeze, sicken and die alone.
    We do not know whether anything stands higher than abstract thinking; most likely nothing does. So what is the conversation about? What else would this OII be? If a computer masters abstract thinking and acquires the motivation to think independently, that does not mean it will instantly overtake man and pose an existential threat to him. There is a danger, but let's not exaggerate. We simply need to limit the powers of powerful computers carefully — for example, not hand them the nuclear button and not give them control over communications. Then they will just be smart machines, and that's all.
    1. gladcu2
      gladcu2 13 August 2016 18: 28

      When a person no longer has to think about survival, he redirects his efforts to creativity.

      Hence, however good life gets, people go looking for adventures.
  15. Karina87
    Karina87 13 August 2016 15: 16
    Quote: Falcon5555

    We do not know if there is anything higher than abstract thinking, most likely there is nothing higher.

    And where do you get such confidence?
  16. Prince of Pensions
    Prince of Pensions 13 August 2016 16: 12
    But if that is the answer, why are so many of the world's smartest people so worried about the future of artificial intelligence? Why does Stephen Hawking say that developing ISI "could spell the end of the human race"? Bill Gates says he "doesn't understand people who are not concerned" about it. Elon Musk fears that we are "summoning the demon." Why do so many experts consider ISI the biggest threat facing humanity?
    Many of them, yes. It's all posturing born of notions of their own exclusivity — they're infected with that virus. The kind who'd cut their own throat shaving, or strangle themselves tying a tie. Morons.
    Ours will simply build it and put it to use. Without the blah-blah-blah.
    1. Greenwood
      Greenwood 14 January 2017 11: 15
      Quote: Prince of Pensions
      Ours will simply build it and put it to use. Without the blah-blah-blah.
      So far ours have only learned to announce tenders and carve up the budget, to loud "blah-blah-blah" about things "having no analogues in the world."
  17. gladcu2
    gladcu2 13 August 2016 18: 32
    An interesting series of articles, and well translated. A typically Western way of constructing sentences, good publicistic style, a minimum of academic "tedium" laden with hard-to-remember terminology.

    Thanks to the author.
  18. ICT
    ICT 13 August 2016 19: 42
    Let's first define the term by which we will describe AI — otherwise it will turn out like over there

    1. gladcu2
      gladcu2 13 August 2016 20: 00
      The author has already decided on the terminology. There should be no alternative.
  19. ICT
    ICT 13 August 2016 20: 27
    Quote: gladcu2
    with terminology

    Quote: gladcu2
    The author has already decided on the terminology.

    I just cannot grasp what, physically, ran so high up the staircase on that chart:

    1. A skyscraper and a heap of supercomputers cooled by artificial blood?
    2. Some android out of "Screamers"?
    3. Or is my PC, sitting by the stove, a neural node of some higher being?
  20. japs
    japs 13 August 2016 22: 30
    Like the first part of this article, the second is, to my mind, rubbish.
    It has no place on this site, not even under a "Terminator" heading.
  21. srha
    srha 13 August 2016 22: 34
    Only one thing is reassuring: "specialists" with this kind of "knowledge" will never build even an AI, let alone an ISI.

    Let me explain. "Oxford philosopher and leading AI theorist Nick Bostrom thinks... 'All species die out.'" How can one say such a thing — that is, claim to have studied the question — while not knowing about prokaryotes: effectively immortal unicellular organisms (with domains, classes, families, genera and species of their own) that do not die but divide and divide, and have been doing so for more than 4.5 billion years?!

    They are also completely unacquainted, judging by the absence of any mention, with the stability of systems, with the importance of the social and material environment for the activity of the mind, and even with the simple and therefore universal rule that reconstruction is more complicated than construction. That means any self-development (self-reconstruction) of an ISI is more complicated than the ISI's own capabilities allow, and is therefore impossible in the long run. Mankind overcomes this obstacle through sociality, which these same experts seem not to have guessed at (maybe it will come up in part three). Incidentally, because of that same sociality an ISI would be forced to socialize, like every person, and would be viable only if successfully socialized — not the monster Hollywood science fiction scares us with.
  22. NOTaFED
    NOTaFED 14 August 2016 09: 39
    The article is a retelling of a COMPILATION from a handful of sources written by a certain Tim Urban.
    Who is Tim Urban? Answer: "Tim Urban. American blogger."
    A blogger, just a blogger, that is, a person whose job is to write articles on the internet.
    Accordingly: trusting a blogger is like measuring water with a sieve.
  23. voyaka uh
    voyaka uh 14 August 2016 12: 22
    The article is serious.
    Let me give an example. When I was a boy, a distant relative, a mathematician
    now retired after working at Harvard, taught me at the dacha to play a little Go
    (at the most basic level), the Japanese game.
    I remember him saying (back in 1970): "Perhaps they will invent
    a computer that plays chess at the level of the world champions.
    But never, NEVER will they invent a computer that beats a man
    at Go, because the number of options and the degree of abstraction in that game are innumerable."

    This year a computer confidently, and more than once, beat the world champion at Go.
    1. gridasov
      gridasov 14 August 2016 12: 47
      When I say that the mathematics embedded in a computer must correspond to distribution functions, people call me stupid; yet you are respected for raising questions of multivariance. Moreover, others rule out the possibility of general mathematical analysis built on the function of constructing algorithmic mathematical relationships, and cling to computational mathematics as a panacea.
      The point is that the mathematics of human reasoning is not computational and is not built on integral and differential calculus. It is distributive!!! It lets you build relationships both by vector and by potential. It is far simpler, and fully accurate for its task of obtaining answers that reflect reality as it is, in analysis.
      There is nothing surprising in a computer winning at Go. Just compare the total resources spent by such a person and by such a computer. Honestly, like children!
    2. ICT
      ICT 14 August 2016 15: 12
      Quote: voyaka uh
      This year, Comp confidently - and several times - outplayed in Go

      But can this same computer (program) play chess with me, or simply poker? I think not.
      1. voyaka uh
        voyaka uh 14 August 2016 16: 58
        It is easy to load many applications (programs) onto one computer:
        for chess, Go, poker, and a thousand more. It will "switch"
        to a new task (exactly as our brain does) and off it goes.
        And it will beat you (and me too), have no doubt.
        Such programs are self-learning. First the computer copies your tactics,
        style and techniques (self-learning) and begins to apply them against you. You end up playing against yourself,
        only improved (a computer's speed and accuracy of execution are obviously higher).
        And the computer starts pressing you with your own methods. On top of that, it keeps in stock the memory of previous opponents.
        And if it tires of tormenting you, it can "knock you out" with someone else's technique.
        But it will certainly play "cat and mouse": what if it learns something new?
        In general, AI is not some personality with a high IQ, but an extremely vile, cunning, unscrupulous, mirror-like and ruthless type sad
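        The "copies your tactics and applies them against you" idea in this comment can be sketched as a toy opponent model; this is a minimal illustration, not how real self-learning game engines work, and all names here (MirrorAgent, observe, choose) are invented for the example.

```python
from collections import Counter, defaultdict

class MirrorAgent:
    """Toy opponent model: remember which move the opponent favors
    in each game state, then play that same move back at them."""

    def __init__(self):
        # state -> counts of moves the opponent has made in that state
        self.model = defaultdict(Counter)

    def observe(self, state, opponent_move):
        """Record one opponent move seen in a given state."""
        self.model[state][opponent_move] += 1

    def choose(self, state, legal_moves, fallback):
        """Prefer the opponent's own favourite move, if it is legal;
        otherwise fall back to a default move."""
        for move, _count in self.model[state].most_common():
            if move in legal_moves:
                return move
        return fallback  # nothing learned yet for this state

agent = MirrorAgent()
agent.observe("opening", "e4")
agent.observe("opening", "e4")
agent.observe("opening", "d4")
print(agent.choose("opening", {"e4", "d4", "c4"}, "c4"))  # e4
```

        A real engine would of course also evaluate positions; the sketch only shows the "mirror your style" part the comment describes.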
        1. fix
          fix 16 August 2016 15: 56
          Having beaten the world champion, it has hit the ceiling: there is no one left to learn from. Will it play against itself? Or teach someone? Or invent a new game?
          Or wait forever until someone sits down opposite it again?
      2. The comment was deleted.
  24. ICT
    ICT 14 August 2016 21: 19
    Quote: voyaka uh
    And replay you (and me too)

    And can it win at that kind of game?
  25. theone
    theone 14 August 2016 22: 21
    And why is AI always spoken of in the singular?
    Where there is one, there will be thousands of entities.
    And which of them will be on our side, and which against us, we shall yet see. Competition and dominance, you know...

    As for the game of meanness, fraud, vulgarity and flattery, mankind has it trained to perfection. Let's play that game with them (the AIs).
  26. theone
    theone 14 August 2016 22: 46
    Besides, as has long been said: every sage has his share of simplicity.
  27. TOPchymBA
    TOPchymBA 15 August 2016 13: 47
    Quote: gladcu2
    That is, they simply changed the concept of the antivirus software. Where the old ones scan the computer for library "markers", the Tomsk software is supposed to scan for the malicious code itself, probably comparing it against library samples.

    Traded an awl for soap, in other words, the same thing. But they called it AI.

    Agreed. It's called heuristic analysis.
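    The distinction this comment gestures at, exact signature "markers" versus heuristic analysis, can be sketched like this; the byte patterns, suspicious traits and threshold below are all made up for illustration, not taken from any real antivirus.

```python
# Toy contrast: signature matching vs. a crude heuristic score.
SIGNATURES = {b"EVIL_MARKER_1", b"EVIL_MARKER_2"}  # hypothetical known-bad byte patterns

def signature_scan(data: bytes) -> bool:
    """Classic approach: flag only files containing an exact known pattern."""
    return any(sig in data for sig in SIGNATURES)

def heuristic_scan(data: bytes, threshold: int = 2) -> bool:
    """Heuristic approach: score suspicious traits and flag above a threshold.
    The traits below are invented for the example."""
    score = 0
    if b"CreateRemoteThread" in data:
        score += 1  # mentions a code-injection API
    if b"cmd.exe" in data:
        score += 1  # shells out to the command interpreter
    if data.count(b"\x90") > 50:
        score += 1  # unusually long run of NOP bytes
    return score >= threshold

sample = b"...CreateRemoteThread...cmd.exe..."
print(signature_scan(sample))  # False: no known marker present
print(heuristic_scan(sample))  # True: two suspicious traits
```

    The point of the contrast: a signature scanner misses anything not already in its library, while the heuristic flags never-before-seen code at the cost of possible false positives.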
  28. Dali
    Dali 22 September 2016 23: 35
    In a certain part of the article, yes, it is delirious raving...

    I will explain my opinion:
    1) To create a real AI at the level of (present-day) man, one must be smarter and more intelligent than (present-day) man; in effect, one must rise a level higher, to the level of a superman (or, as this article puts it, an ISI).

    2) Hence this event is not near, far from near, although it will come. Probably most forum members are familiar, at least from articles on the internet, with the phenomenon of "indigo" children, who are considered people of the future, people with superintelligence, i.e. the very ones who could actually build an AI at the level of a living person.

    3) To be a self-developing object, it must become at least independent in the material world; it must learn to feel the material world and to understand, at least to some extent, its own needs. A pure "computer" algorithm, however complex, will not have even part of the information needed for self-development without passing through the path of becoming independent in this material world, i.e. the path of human development. Of course, a person can help it develop in this direction, but even then only subject to point one.
  29. zenion
    zenion 22 October 2016 20: 28
    People are no longer needed. I don't remember the author, but I remember the title of the book: "CD: A Cybernetic Double". It was written in the 70s and was unusually interesting. A superintelligence will look at a person the way a person looks at a dog: an amusing creature of which they say, you can see in its eyes that it understands everything, but it cannot speak.
  30. gridasov
    gridasov 24 October 2016 18: 05
    Man lives in a world of perception at the level of objects. That is, we all perceive objectively: this is this, and that is that. But everything in this world is processual; everything is in the process of transformation. Therefore a worldview built on the principle that the one is inseparable from the other creates reality and an adequate perception of reality. This is the basis of a mathematics built on the function of the constant value of a number, which makes it possible to combine multiplicity with individuality. Without the basics of this knowledge, AI cannot even come close.
  31. Krabik
    Krabik 11 November 2016 09: 43
    This Kurzweil reminds me of Chubais: just as deceitful and cunning a quack.

    Attempts to make man immortal are needed to squeeze money out of rich old men who cling with their last strength to a life that is slipping away.

    The problem of aging is well known and obvious: errors in cell replication and the accumulation of excess debris in the chromosomes with each division.
    If those errors were somehow eliminated, it would stop evolution.

    Talk of the reasonableness and superintelligence of search engines, and of endowing them with intellect, likewise does not hold water: all they can do is collect and serve up other people's thoughts.

    In fact, mankind's development sped up because the internet brought people closer together, and now people no longer reinvent the bicycle in every isolated society.

    In short: a storm in a teacup!
  32. ML-334
    ML-334 7 August 2017 19: 53
    The Lord introduced the intellect into the monkey, it turned out to be man. This is how we live, show off like God has and the essence of the monkey. Keep the commandments of God — the gates of Reason will open.