Rebooting AI: Building Artificial Intelligence We Can Trust

Authors: Gary Marcus and Ernest Davis (2019)


Three illusions

For half a century, humanity has been familiar with the concept of artificial intelligence (AI), and for half a century scientists have regularly promised that just a little more effort will solve the problem of machine intelligence and a super-smart assistant will come to people's aid. At first glance, that is exactly what has happened: Internet search, machine translation, the face recognition behind hundreds of useful smartphone functions, robots assisting in surgery – all of it testifies to the intellectual power of computers. AI is becoming a national priority: China alone plans to invest $150 billion in its development by 2030. The McKinsey Global Institute has calculated that the total economic effect of AI could reach $13 trillion, comparable to the effect of the steam engine in the 19th century – if, of course, the technology lives up to its promise.

But are these systems really smart? The more AI enters our lives and the more tasks we delegate to it, the clearer it becomes that the intellectual capabilities of machines are still exaggerated. The robots that, according to futurologists, will take millions of factory jobs within a decade are still showing modest successes, and then only in carefully choreographed demos. Computer programs have learned to compile news, but they cannot distinguish true information from fake. Driverless-car trials are in full swing, but they are accompanied by human casualties, including fatal ones. Futurists say IBM's Watson will soon replace medical interns, but so far it makes diagnostic mistakes a first-year medical student would not. Face recognition systems threaten total surveillance, yet they still misidentify people.

We are increasingly trusting machines that are, for now, too unreliable. Billions of dollars are being spent today on technological solutions that tomorrow will prove patently inadequate. We got more than we dared hope for, yet less than we could have. Still we continue to believe in AI. Three illusions shape our perception of the computer mind:

1. We humanize AI as soon as it shows even minimal rudiments of intelligence. It is fun to ask the voice assistant Alice about everything and listen to her awkward jokes and inaccurate answers, but it is worth remembering that she does not really answer us: she reacts to trigger words, not to meanings. It is convenient to trust a self-driving car and get carried away by a movie while it takes you to your destination, but it is important to keep in mind that an autonomous car is still very bad at distinguishing obstacles in its path. A horrifying incident happened to a hapless Tesla owner whose car drove straight under a truck trailer crossing the highway, killing him.

2. We believe that if a computer has coped with one task, it will cope with another, more difficult one. When Google's AlphaGo beat the top Go player Lee Sedol 4–1 in 2016, humanity was amazed: people had lost the battle for intelligence. Yet AlphaGo's successes ended there: it cannot play other games and cannot set itself other intellectual tasks. All AlphaGo can do is play Go.

3. We believe that if a technological solution works for a while, it will keep working. It is relatively easy to build a demo of a driverless car that can handle an easy route in good weather. The problem is adapting to changing conditions: no developer can guarantee that a drive around Bombay in heavy rain will go as smoothly.

Are machines really capable of reliably performing the tasks we entrust to them? Are they able to correctly understand our instructions? The answer to both questions is no, and that raises a third question: why?


Deep learning and its disadvantages

Today's AI stands on two pillars: deep learning and big data. At the dawn of artificial intelligence, in the 1960s, there was neither. Computers were underpowered, and the Internet with its ocean of information did not exist. AI pioneers followed a laborious path: relying on accumulated knowledge and common sense, they formulated an algorithm for achieving some goal and then turned it into program code – they literally taught the computer to think. This approach is still used in robot route planning and GPS navigation. Gradually, however, hand-coded knowledge was supplanted by machine learning with neural networks.

The concept of the neural network [1] was described as early as 1943 by the psychologist Warren McCulloch and the mathematician Walter Pitts. In 1958, the psychologist Frank Rosenblatt put it into practice: he built the perceptron, a model containing about a thousand connected "neurons" that could receive signals from 400 photocells. That neural network was still single-layer and simple, but over time the idea was refined. In 1982, John Hopfield created a network whose "neurons" could adjust their own parameters. In 2006, Geoffrey Hinton developed deep learning algorithms for multilayer neural networks.

The word "neurons" is not accidental: the structure of such a network resembles the human brain, in which many neurons are joined by many connections. If nerve cells die, a person's mental activity suffers; if there are few electronic neurons (as in Rosenblatt's model), the computer model is weak. The more layers of neurons involved, the deeper the network and the more capable it is (hence the term "deep learning"). And the more data a neural network receives, the faster it trains. Until big data arrived, this mechanism existed only in theory.
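To make "depth" concrete, here is a minimal, purely illustrative forward pass through a small multilayer network in Python with NumPy. The layer sizes, random weights, and fake input are invented for this sketch, and no training is shown:

```python
import numpy as np

def relu(x):
    """Standard activation: pass positives through, zero out negatives."""
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# An illustrative 3-layer network: 400 inputs (cf. Rosenblatt's 400
# photocells), two hidden layers, and a single output score.
W1 = rng.normal(size=(400, 64))  # layer 1: raw pixels -> simple features
W2 = rng.normal(size=(64, 16))   # layer 2: simple features -> shapes
W3 = rng.normal(size=(16, 1))    # layer 3: shapes -> object score

x = rng.random(400)              # a fake "image": 400 pixel intensities

h1 = relu(x @ W1)                # each layer recombines the previous one
h2 = relu(h1 @ W2)
score = h2 @ W3                  # "deeper" just means more such layers

print(score)
```

Training would consist of adjusting the weight matrices against data, which is exactly why the approach is so data-hungry.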

A turning point came in the 21st century: we began to drown in information. In 2016, humanity produced a thousand times more content in a second than is contained in all the books ever published. Paradise had come for neural networks, and deep learning became a cornerstone of AI. Facebook uses it to decide which posts to show us in the feed, Amazon uses it to recommend products, Alexa uses it to decipher our requests. Thanks to deep learning and neural networks, the world has become more convenient, and the networks train themselves – what could be wrong with that?

Deep learning has three disadvantages:

1) it requires a huge amount of data (AlphaGo needed 30 million games of Go to achieve superhuman performance) and works poorly with minimal information. The more the real state of affairs differs from the data the network was trained on, the more unreliable the result;

2) it is not transparent. Its work with huge amounts of data is beyond human comprehension: we cannot tell why the system decided one way and not another. Its operation does not reduce to intelligible principles such as "if a person has an elevated white blood cell count, suspect an infection," and it does not correspond to natural knowledge of how the world works. A neural network can recognize a bridge or a trailer by matching the corresponding pixels, but it sees no fundamental difference between the two – witness the Tesla that drove under a trailer;

3) it is limited. A neural network can study a million images of pink piglets yet fail to identify the first black piglet it meets. The obvious fix is to enlarge the training sample, but training a network against one type of distortion does not protect it against another, and the variety of physical objects cannot be enumerated (a toy illustration of this brittleness follows below).
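Here is that toy illustration, with invented color features standing in for images (this is not any real system's experiment): a nearest-centroid "piglet detector" trained only on pink examples confidently rejects the first black piglet it sees.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data: RGB colours of "piglets" -- every single one of them pink.
pink_piglets = rng.normal(loc=[0.9, 0.6, 0.7], scale=0.05, size=(1000, 3))
grass        = rng.normal(loc=[0.2, 0.7, 0.2], scale=0.05, size=(1000, 3))

piglet_centroid = pink_piglets.mean(axis=0)
grass_centroid  = grass.mean(axis=0)

def classify(color):
    """Label a colour by whichever training centroid is nearer."""
    d_piglet = np.linalg.norm(color - piglet_centroid)
    d_grass  = np.linalg.norm(color - grass_centroid)
    return "piglet" if d_piglet < d_grass else "not a piglet"

print(classify(np.array([0.92, 0.62, 0.71])))  # pink piglet  -> "piglet"
print(classify(np.array([0.05, 0.05, 0.05])))  # black piglet -> "not a piglet"
```

The black piglet simply lies outside the training distribution, so the learned statistics mislabel it.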

Two key skills whose mastery would indicate that AI has become like the human mind are reading and the ability of robots to replace humans in various areas of life. How are things on these fronts?


Looks at the book and sees nothing

The amount of information multiplies every day; even narrow specialists cannot keep up with all the news in their field. It would be great if AI came to the rescue here, and it seems ready to: in 2018, Ray Kurzweil [2] announced Google's Talk to Books project. According to Kurzweil, GTB would "turn reading books into a fundamentally different process." And so it did, only "fundamentally different" did not mean what the futurist intended. The book collection loaded into electronic memory did not make GTB wiser. Asked where Harry Potter met Hermione Granger, the system returned only six answers out of 20 that related to Harry Potter – the rest mentioned some other Harry. It also failed the question of who was Chief Justice of the US Supreme Court in 1980 (an answer any search engine finds easily). It turned out, too, that the system's answers depend heavily on the wording of the question. Ask GTB who betrayed his teacher for 30 pieces of silver, and only three answers out of 20 point to Judas. Ask who betrayed his teacher for 30 coins, and GTB remembers Judas in only one case out of 20. To "Who sold his teacher for 30 coins?" GTB does not mention Judas at all.

But reading is not limited to retrieving short answers. What about narratives? The computer was presented with a simple text: "Two children, Anna and Eric, went for a walk. They both saw a dog and a tree. Eric saw a cat and pointed it out to Anna. Anna decided to pet the cat." The computer easily answers the direct question "Who went for a walk?" but cannot answer questions like "Did Anna see the cat?" or "Were the children afraid of the cat?" When we read such texts, we grasp this information effortlessly: we already have an image of children and of a cat, we understand a child's likely reaction to a cat, the likelihood of fright, and so on. AI has no notion of a cat, children, or fright. We operate with general ideas about the world and background knowledge; the computer operates only with probabilities. It can analyze how often words occur together in certain contexts, but that is all. Strictly speaking, the computer has no idea how the world works.

Even the search engines that have improved so rapidly over the past decade should not be overestimated: Google does not need to understand the texts it searches; it simply ranks its database for the most relevant results. Misses are inevitable: the search engine easily lets the word "not" slip past its ears. When researchers asked Siri to find "a fast food restaurant but not a McDonald's" nearby, the system returned three McDonald's in the area. From the AI's point of view, this is perfectly logical: hardly anyone calls KFC a "non-McDonald's."

And what about the Watson supercomputer that beat the best players on the quiz show Jeopardy!? The victory says nothing about its mind: 95% of the answers were Wikipedia article titles. Apparently that is why the Jeopardy! win has remained the high point of Watson's career: IBM has yet to turn it into, say, a reliable virtual assistant.


AI lacks the linguistic principle of compositionality – the ability to derive the meaning of a phrase from the meanings of its parts. Working with the sentence "The distance from the Earth to the Moon is 384,400 km," the system does not recognize phrases referring to two astronomical objects and the distance between them – it relies only on an unstructured search for correlations in a huge dataset.

The problems with McDonald's and with the phrase about the Earth and the Moon could be solved by classical AI methods. In the first case it would suffice to build a list (fast-food restaurants in a given area) and then exclude the items that belong to another list (McDonald's franchises); but list building is not part of the deep learning toolkit. In the second case a template like "the distance from place 1 to place 2 is …" would help to identify phrases stating the distance between two places. However, each such template must be hand-coded, and it fails on slightly modified sentences like "The Moon is about 240,000 miles from the Earth." Language is too varied and flexible to be coded this way. Both fixes are sketched below.
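A minimal sketch of the two classical fixes in Python; the restaurant names and the regular-expression template are invented for illustration:

```python
import re

# Fix 1: build a list, then exclude the items on another list.
fast_food_nearby = {"McDonald's", "KFC", "Burger King", "Subway"}
mcdonalds        = {"McDonald's"}
print(fast_food_nearby - mcdonalds)  # fast food, but not a McDonald's

# Fix 2: a hand-coded template "the distance from X to Y is Z".
template = re.compile(r"the distance from (.+) to (.+) is ([\d,]+ ?km)",
                      re.IGNORECASE)

m = template.search("The distance from the Earth to the Moon is 384,400 km")
print(m.groups())  # ('the Earth', 'the Moon', '384,400 km')

# A slight rephrasing defeats the pattern entirely:
m = template.search("The Moon is about 240,000 miles from the Earth")
print(m)           # None -- the hand-coded template is too rigid
```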

Mankind has accumulated mountains of knowledge. Physiologists know a great deal about how visual images are formed; linguists about the structure of language; physicists about how robots can move. But how to teach all this to AI is unclear: we simply have no language into which this knowledge could be translated. Deep learning helps AI ingest mountains of information quickly but has no notion of compositionality. Classical AI techniques can give the computer a sense of compositionality but are extremely labor-intensive and not entirely reliable.


The Rise of the Machines is delayed

YouTube is full of demos of robots performing a variety of tasks. The key word is "demo": as a rule, we are watching a take filmed for the tenth time in a strictly controlled setting. The likelihood that we will soon have robots capable of a wide range of tasks – from painting walls to wrapping gifts – is nil. The most successful project of this kind is the robot vacuum cleaner. Oh yes, and camera drones, which help their operators a great deal – but drones do not have to lift things, manipulate objects, or climb stairs.

Robots showed their maximum capabilities during the cleanup after the 2011 nuclear accident at Fukushima-1 (they could plot optimal routes and correct their own actions), although they were mostly controlled by radio operators.

The key issue here is reliability. If a social network's algorithm errs and shows users a post their settings should have filtered out, nothing terrible happens. But a robot nurse cannot operate in a "nine good decisions out of ten" mode. A successful robot must be able to work out five things:

1) where it is;
2) what is happening around it;
3) what it should do right now;
4) how it should carry out its plan;
5) what it must keep in mind in the long term to achieve its goal.

It is enough for a robot vacuum cleaner to work out parameter No. 3; it would keep vacuuming in the eye of a hurricane. The rest are harder. Engineers have largely solved the problem of orientation: modern machines can not only perceive the surrounding space with sensors but also correct their estimate of their own position, adding objects they have not seen before to their mental map (a toy sketch of such map updating follows below). Great progress has also been made in controlling a robot's movements (arm rotation, walking, and so on), one of the hardest tasks in the industry: even such a familiar action as picking up a teacup with two fingers requires a series of complex manipulations we humans never notice – moving different parts of the arm and hand so as not to knock against the table, applying just enough force to the cup's handle, and so on. The movements of Boston Dynamics robots are strikingly animal-like: their programs instantly and continuously update the information in the robot's "muscles" so that they respond flexibly to the environment instead of merely following a preplanned program. These robots can walk on uneven surfaces and climb stairs, though so far only under carefully controlled circumstances.
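A toy sketch of "adding unseen objects to a mental map" (the grid, coordinates, and sensor readings are all invented, and this is vastly simpler than real mapping algorithms):

```python
import numpy as np

# The robot's "mental map": an occupancy grid, 0 = believed free, 1 = obstacle.
grid = np.zeros((10, 10), dtype=int)

def sense_and_update(observed_obstacles):
    """Add newly observed obstacles to the map; ignore already-known ones."""
    for (r, c) in observed_obstacles:
        if grid[r, c] == 0:
            print(f"new object at {(r, c)} -- adding it to the map")
            grid[r, c] = 1

sense_and_update([(5, 7), (2, 3)])  # first pass: two unseen objects appear
sense_and_update([(5, 7)])          # second pass: already mapped, no change
```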

The robots in demos that fold towels "for some reason" always operate against a dark background, and the linen is suspiciously bright. What happens with dim lighting and mismatched towels?


True control over its actions requires that a robot not only navigate the terrain but also be aware of what can happen around it and how to react. Is a storm coming? Can the stove catch fire if someone forgets to turn it off? Add a thousand more household trifles if we are talking about a robot housekeeper. In a factory, situational awareness is a relatively solvable problem, because a factory is a tightly regulated closed system; an ordinary apartment is not. Yes, object recognition is AI's forte, but real life is full of nuances: the greater the change in lighting, or the more things in the room, the more likely the robot is to err. Moreover, object recognition systems are far from always able to grasp the relationships between objects in a scene: a fire in a fireplace and a fire devouring the curtains are two very different things, but current robots are unaware of this. Add information about different types of houses (concrete, wooden, and so on), and the task grows harder still.

Elon Musk rightly attributed the initial difficulties in producing the Tesla Model 3 to "too much automation." The point, probably, was that even in a factory, production proved too dynamic a process for the robots to keep up with: their programming was not flexible enough.


In the seventh decade of robotics, designers can claim to have taught robots to understand where they are. But assessing the situation and adjusting actions to unplanned changes remains a technological dream. The world is too complex, and robots are too inflexible. If the rise of the machines does take place and you hear about it on the news, just walk to the door and check that it is shut. That's it, you are safe. You don't even have to latch the lock: modern robots cannot even handle a doorknob.

All this means one thing: by absorbing gigabytes of information, neural networks do not learn in the full sense of the word. The data they process are not connected by causal relationships and do not turn into complex cognitive objects. Deep learning is good when the rules are well defined and the information is consistent. Go and chess are very difficult games, but they are closed systems with strictly established rules. A highway, or an ordinary room with dozens of objects of different sizes, is another matter entirely. Every cat looks different, and it is no use for a robot housekeeper to know each individual cat and what may or may not be done with it; it needs a general understanding of cats. Modern AI cannot think that way.

The power of common sense

Why are we smarter than a computer?

It is generally accepted that the human mind is inferior to the computer. We tire, we get distracted, we forget important things, we are prone to emotion. And yet our intelligence is stronger than artificial intelligence: we perceive the world in its complexity, adapt easily to circumstances, and reason from incomplete information. How do we do it? Here are the capacities it would be good to teach AI.

1. Our knowledge of the world rests on propositions, while AI operates only on masses of data. Propositions are the meanings that stand behind our statements and are accepted by speakers by default. Saying "The sun is rising," "Is the sun rising?" or "The sun is rising!", we mean the same underlying content: we know that the sun rises; it is a fact about the world, framed in our minds by the grammar of our language. For AI there is neither the world nor fixed knowledge about it; it works by sorting through options, which is why a negation is indistinguishable to it from an assertion (recall the search for a non-McDonald's).

2. Our knowledge of the world rests on abstraction and generalization – AI is deprived of both. Without noticing it, we use an enormous number of highly abstract concepts: "beauty," "checkmate," "Marxism," "gravity." All are products of the mind; we cannot touch them, yet they substantially shape our attitudes.

3. Our thinking is a complex, structured, complementary phenomenon, which is why we can solve problems of varying difficulty. The psychologist Daniel Kahneman distinguishes two modes of thinking: fast, driven by automatic reactions, and slow, engaged when we reason and solve problems. The trick is that we use different kinds of cognition for different problems. At the same time, there are no separate brain areas responsible for individual actions: different parts of the brain combine into different patterns to perform a specific task.

All this runs contrary to current trends in AI development. When NVIDIA (the world leader in interactive graphics) created its driving model, it moved away from separate perception, prediction, and decision-making modules toward a single, relatively homogeneous network that maps raw data directly to control commands. Easy to build, but not very effective in practice: such systems run without driver intervention for only a few hours, and on the road they can do little more than keep to their lane.


4. Our understanding of the world relies on immediate perception and on background knowledge alike, which is why our knowledge is complex and ambiguous. Taking what belongs to others is bad, but Robin Hood is cool. Killing people is unacceptable, but the terrorist must be eliminated. How do you write that into the program code of a police robot?

5. We easily integrate new knowledge into our existing picture of the world. When a child first sees a photo of a turtle, he can immediately recognize not only other photos of turtles but also turtles on video and in real life, distinguishing them from cats or kangaroos. He can generalize that turtles, like other animals, breathe, eat, and reproduce, are born small, grow, and die. No fact about the world exists in isolation; each is built into some kind of theory – that is the power of knowledge. And that is something deep learning does not do.

6. We have a notion of causal relationships and can tell them apart from correlations. Knowledge of causality helps even when we do not fully understand a mechanism: we take aspirin because we know what it does, no biochemistry required. We also know how to distinguish true causality from spurious causality, and AI does not. In Germany, a correlation was once found between the decline in the birth rate and the decline in the number of stork pairs: the two curves for 1965–1987 coincided remarkably. So, fewer storks mean fewer children? A computer would have drawn exactly that conclusion (a toy version of this trap is sketched below).
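A toy version of the stork trap, with made-up numbers: two series that decline independently over 1965–1987 correlate almost perfectly, yet neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1965, 1988)

# Both series simply trend downward, each with its own noise.
stork_pairs = 1200 - 20 * (years - 1965) + rng.normal(0, 15, len(years))
births = 900_000 - 15_000 * (years - 1965) + rng.normal(0, 10_000, len(years))

r = np.corrcoef(stork_pairs, births)[0, 1]
print(f"correlation: {r:.2f}")  # near 1.0 -- yet storks don't deliver babies
```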

7. We keep track of individual things and notice subtle differences. Anna's husband used to work as a journalist and has now decided to try his hand at design. The grocery store on the corner used to be decent but has deteriorated over time. Our life consists of hundreds of such facts, connected by thin but strong threads – this is experience. AI reasons differently, focusing not on individual facts but on categories: "children mostly prefer sweets to broccoli," "cars have four wheels."

Yes, deep learning systems are good at recognizing faces – but that too is working with categories, not with persons. It is easier to train a system to recognize photographs of the hockey player Alexander Ovechkin than to make it conclude, from several years of news reports, that the athlete played four seasons for Dynamo.


8. We do not learn from scratch, whereas AI is built on self-training from the information loaded into it. This often works – for example, when AI tags video content – but don't expect too much: a video surveillance system can tell a video of a person walking from a video of a person running, but it cannot tell someone unlocking a bicycle from someone stealing it.

Thinking is a complex phenomenon. Psychologists have tried to find a universal master key to it: behaviorists in the mid-20th century, for example, reduced the whole richness of the human psyche to the stimulus-reward mechanism underlying conditioned learning. But behaviorism rather quickly gave way to cognitive psychology, which focused on the complex phenomena of memory, perception, and attention. Today's deep learning is essentially the behaviorists' supervised learning reborn. An AI built only on big data, without complex abstract knowledge of the world and of the causality of phenomena, will never be intelligent in the full sense of the word.


How to teach a computer common sense

The capacity for abstract thinking, the notion of causal relationships, and other features of our mind add up to what is called common sense. It turns out that teaching it to computers is extremely difficult.

• The first way is to do it manually. One of the largest projects of this kind is the Never-Ending Language Learner (NELL), launched in 2010. Day after day, systematically and methodically, it scans documents on the Web for common linguistic patterns: from a phrase like "cities such as New York, Paris, and Berlin" the system learns that New York, Paris, and Berlin are cities. Another project, ConceptNet, run by the MIT Media Lab, has crowdsourced AI training: volunteers enter simple common-sense facts in English on the project's website. A participant may be asked, for instance, to supply facts relevant to understanding the story "Mark caught a cold. Mark went to the doctor," supplementing it with information like "people with a cold sneeze." The phrases are then automatically converted into machine encodings – much closer to the methods of classical AI. Still, a lot of information too obvious for people, like "after something has died, it can never be alive again," is left overboard. The devil is in the details…

Apparently there is no single universal way to adapt common sense for AI, but the task does not look unsolvable at all.

• The second way is taxonomy [3]. It helps cover a significant portion of the data, especially since Wikipedia is full of already classified information. But while everything is clear with animals or plants, how does one classify phenomena like "the Reformation" or "the military operations of the USSR in Finland in 1939"?
• Yet another portion of the data can be covered by diagrams known as semantic networks. They can represent a much wider range of concepts: not only which parts make up which wholes and which categories sit inside which other categories, but also relationships like "Saratov stands on the Volga" or "policemen are people who drive police cars." But semantic networks often get bogged down, especially when temporal relationships must be represented. A semantic network may record that the hockey player Alexander Ovechkin was born in Moscow, that he is 190 cm tall, and so on – and the AI can easily conclude that Ovechkin was 190 cm tall at birth. If we specify that Ovechkin has played hockey from 2001 to the present, the AI may assume that the athlete has been playing hockey 24 hours a day, 365 days a year, for the last 19 years (a toy triple store below shows why).
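A minimal sketch of such a network as subject–relation–object triples (all data invented) shows the problem: without time qualifiers, every retrieved fact looks simultaneous and permanent.

```python
# A toy semantic network: subject-relation-object triples, with no notion of time.
triples = [
    ("Ovechkin", "born_in", "Moscow"),
    ("Ovechkin", "height_cm", 190),
    ("Ovechkin", "plays", "hockey"),
]

def facts_about(subject):
    """Return every (relation, object) pair stored for a subject."""
    return [(rel, obj) for (s, rel, obj) in triples if s == subject]

# Nothing here prevents a naive reader from combining the facts into
# "born in Moscow, 190 cm tall at birth, playing hockey around the clock".
print(facts_about("Ovechkin"))
```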
• A more flexible approach is offered by formal logic. The construction "All P are Q. R is P. Therefore R is Q" fills many semantic gaps ("Hockey players do not play hockey 24 hours a day. Ovechkin is a hockey player. Therefore Ovechkin does not play hockey 24 hours a day"). However, formal logic has remained the province of classical AI and is not used in deep learning (a minimal sketch follows below).
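A minimal forward-chaining sketch of that syllogism in Python; the rule base and facts are invented for illustration:

```python
# "All P are Q": each category implies further categories.
rules = {
    "hockey_player": ["person", "not_playing_24h_a_day"],
}
# "R is P": known category memberships.
facts = {"Ovechkin": ["hockey_player"]}

def infer(entity):
    """Apply the implication rules until no new categories appear."""
    derived = set(facts.get(entity, []))
    changed = True
    while changed:
        changed = False
        for category in list(derived):
            for implied in rules.get(category, []):
                if implied not in derived:
                    derived.add(implied)
                    changed = True
    return derived

# "Therefore R is Q": Ovechkin is a person and is not playing 24h a day.
print(infer("Ovechkin"))
```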

All these methods would have to work together toward one key goal: letting computers treat individual facts as instances of more general semantic relationships. As for the most important structures of knowledge that should underlie AI, it is worth following the philosopher Immanuel Kant in recognizing the fundamental character of the categories of time, space, and causality. But even here the difficulties are obvious:

• To build a system that could figure out when Ovechkin played hockey and when he did not, it takes more than an abstract notion of time: it takes, for example, the commonsense notion that "a person cannot perform complex skills effectively in his sleep," combined with specific facts of the athlete's life.
• Euclidean space is well mastered by AI – witness the realistic special effects in Hollywood blockbusters. But while it can calculate the shape and volume of objects, AI does not understand the functionality of form, and many seemingly simple household items are far from simple geometrically. A string bag, whose shape changes depending on what is put into it, presents a huge geometric challenge: the AI has to keep in mind that the bag has no single fixed shape, that you can carry potatoes in it but not peas, and so on. This kind of knowledge has not yet been achieved, and without it a robot is useless both in the home and in production.

An obvious, though as yet insufficiently justified, way to combine the three fundamental categories of knowledge is computer simulation. Programs used in video games such as Grand Theft Auto simulate the interactions between cars, people, and other objects in the game world. A simulation takes the shape, weight, and other characteristics of an object at the initial moment and then uses the knowledge of physics to predict how it will move. Scientists use simulations to represent processes as complex as the evolution of galaxies or the movement of blood cells. But simulation demands enormous computing power, because the world is extremely complex and requires the careful calculation of many variables. Reality simulators can capture only a fraction of what a robot would encounter in real life.


Teaching computers inevitably implies that they must also learn on their own: it is unrealistic to hand-code everything machines need to know. We need a compromise between AI's ability to act alone – tagging millions of photos of dogs, say – and a way of conveying to it how particular breeds behave, which are potentially dangerous, and under what conditions.

Genuine deep learning is not what today's AI engages in. Real learning is the ability to learn in open systems such as a street or a room, with their complex spatio-temporal and causal relationships.


How not to become hostage to AI?

Because of the limitations of deep learning, too many AI solutions have so far proved short-lived. The field has almost none of the firm engineering standards taken for granted in other industries. Such negligence looks harmless only while the stakes are low. It is fine if automatic tagging of people in photos is right only 90% of the time on Instagram, but what happens when the police start using such programs? Google search needs no crash tests, but self-driving cars do. Meanwhile the vulnerability of electronic systems is already plain, especially with the Internet of Things or GPS, which lend themselves easily to hacking.

Engineering standards, in turn, require adequate ways of measuring progress in AI. The most famous yardstick is the Turing test (a machine will count as intelligent in the full sense when, in conversation, it makes a person believe he is talking to another person rather than a machine). But in 2014 the chatbot Eugene Goostman fooled the jury of a Turing Test contest by posing as a 13-year-old from Odessa who supposedly did not know the answers to some questions or simply dodged them. The Turing test misses the main goal of AI development: not that a computer should fool people as cunningly as possible, but that it should navigate the world, reason flexibly, and bring maximum benefit to people.

But what if AI nevertheless decides to cheat and wrap a person around its finger? People often recall the laws of robotics invented by the science fiction writer Isaac Asimov [4]: as soon as we realize that AI is turning into a potential threat, we simply impress the three laws upon it, and the trick is done.

But nothing is simple here either. First, as already noted, we do not speak AI's language: deep learning differs fundamentally from classical programming, neural networks are a black box, and researchers do not always understand what is happening in their depths. Correcting a neural network's errors is less like fixing faulty code than like treating the side effects of a new drug, to which the body may react in any number of ways. Second, breaking the first law seems unavoidable in many situations (what if a self-driving car is heading toward a school bus full of children?). Or the robot may freeze at the wrong moment over a moral dilemma not actually worth a damn (wondering, say, whether people should be rescued from a burning building at all, given the potential harm the residents' children might someday do to others). Is it possible, finally, to create a superintelligence entirely devoid of both the notorious common sense and moral values? Scientists have no answer.

AI is unique in that it has the potential to reduce its own risks: knives cannot reason about the consequences of their actions, but AI someday will. The real danger is not that AI will one day become radically smarter and take over the world, but that already today we rely wholly on immature technological solutions that are far from genuine intelligence. We are moving too fast down the wrong path. Our AI is a teenager, unaware of its own powers and unable to weigh the consequences of its actions.

The only way out is to leave the path of deep learning subordinated to blind statistics and take up the creation of machines governed by complex cognitive models with an understanding of space, time, and causality. Only then can we count on a comfortable future. Robots equipped with such software will move on land and in the air safely for people and for themselves, manipulate a variety of objects, and interact comfortably with the environment. Search engines will answer any question, however sophisticated. In time, machines that have achieved an everyday understanding of the world will rise to the still more complex picture of reality held by experts in the sciences – and robot doctors and robot lawyers will appear. Such machines will indeed be able to educate themselves in the sense we are used to. All this will change life as much as the Internet has – perhaps even more.

We cannot say for certain when and how this will happen: the ancient Greeks, first encountering electricity, could not have dreamed of the Internet. But we know it will happen; AI will inevitably grow smarter – and we had better try to make the transition as safe for us as possible.

Top 10 Thoughts

1. Deep learning is opaque and limited, and it is ineffective when working with minimal data and unable to link data into a coherent picture. "Deep" does not mean smart: it refers only to the number of layers in the network, not to any depth of understanding of the world.

2. Today's technological progress is aimed at building relatively unintelligent machines that perform narrow tasks and rely on blind correlations in data. Good for Facebook and Google, bad for the rest of us.

3. Do not overestimate the supercomputers that won first at chess and then at Go. In closed systems with strictly established rules, AI really is effective. But the world around us is an open, unpredictable system.

4. We are weaker than a computer in that we tire, get distracted, and are subject to emotions. But we are immeasurably stronger because we have a coherent view of reality and can relearn even from a minimum of available information.

5. Two key signs that would testify to AI's triumph are reading and the ability of robots to replace humans in various areas of life. But the computer still does not understand the meaning of elementary texts, and the highest achievement of robotics to date is the robot vacuum cleaner.

6. Let machines learn entirely on their own? Hand-code everything they need to know? Neither: we must look for a compromise, and it cannot be found within the deep learning methodology practiced today.

7. There is no single universal way to adapt common sense for a computer, but several can be used at once – all of them lying in the realm of classical AI, to which it is not too late to return.

8. The Turing test and Asimov’s laws of robotics are only good in theory.

9. The real danger is not that AI will take over the world tomorrow, but that already today we rely entirely on immature technological solutions that are far from genuine intelligence.

10. AI will inevitably get smarter; whether it remains the mind of an immature teenager (today's version) or becomes conscious intelligence is up to us.

[1] A multilayer neural network works as follows: it starts from the raw data loaded into it and, layer by layer, builds ever more complex representations. In image recognition, pixels serve as raw data for the first layer; neurons in the next layer combine them to detect basic image features such as strokes and their orientation; the next layer assembles longer lines, angles, and so on. Subsequent layers detect ever more complex shapes – ovals, squares – until finally the objects to be recognized emerge: a face or handwriting.

[2] See the summary of futurist and longevity specialist Ray Kurzweil's "Transcend: Nine Steps to Living Well Forever."

[3] Taxonomy is the science of the principles and practice of classification and systematization.

[4] The writer formulated mandatory rules of conduct for robots back in 1942. First Law: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Second Law: "A robot must obey the orders given it by human beings except where such orders would conflict with the First Law." Third Law: "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
