Can artificial intelligence understand you?

07 March 2020

In response to the latest progress in artificial intelligence, some critics argue that despite its rapid progress, AI still has not achieved real, genuine understanding. Such claims assume that understanding is binary: a system either has real understanding, or it does not.

The problem with this view is that human understanding is itself always incomplete and imperfect. I think understanding lies on a spectrum of growing capability. Take water as an example. Most people understand many of its properties: it is wet, it is drinkable, plants need it, it freezes into ice, and more.

However, many people do not know that water also conducts electricity, which is why you should never use a hair dryer in the shower. Yet we do not say that these people lack real or genuine understanding of water. Instead, we say that their understanding is incomplete.

Understanding and misunderstanding by AI

Therefore, we should evaluate artificial intelligence systems in the same way. Existing systems do exhibit some degree of understanding. For example, if I say, "Siri, call Carol," and Siri dials the correct number, can we really claim that it did not understand my request? Denying it any understanding at all would be wrong.

If I ask Google, "Who was defeated by IBM's Deep Blue system?", it displays an answer box with "Kasparov" in large letters; it has understood my question correctly. Of course, this understanding is limited: if I then ask simply, "time?", it returns the dictionary definition of "time" and cannot answer in context.

The controversy over understanding dates back to Aristotle, and Searle's Chinese Room argument is perhaps its clearest modern formulation. I recommend reading Cole's article in the Stanford Encyclopedia of Philosophy.

Characteristics of understanding

I take a functionalist view of understanding: we describe the characteristics of understanding in functional terms, and we evaluate the contribution that various internal structures, in the brain or in an AI system, make to realizing those functions.

From a software engineering perspective, functionalism encourages us to design a suite of tests that measure a system's functions. We can ask the system (or a person): "What happens if the temperature of the water drops to minus 20 °C?" or "What might happen if you use a hair dryer in the shower?" If the answers are correct, we say the system understands; if the responses are wrong, we judge that it does not.
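
To make this concrete, here is a minimal sketch of such a functional test suite in Python. The `system.ask` interface, the probe questions, and the expected keywords are all hypothetical stand-ins, not a real benchmark.

```python
# A minimal sketch of functional tests of understanding. The system under
# test is assumed to expose a hypothetical ask(question) -> str interface.

def test_understanding_of_water(system) -> float:
    """Probe whether a system can predict consequences involving water,
    and return the fraction of probes it answers correctly."""
    probes = [
        ("What happens to water if the temperature drops to -20 C?", "freeze"),
        ("What might happen if you use a hair dryer in the shower?", "shock"),
        ("Is water drinkable?", "yes"),
    ]
    passed = 0
    for question, expected_keyword in probes:
        answer = system.ask(question).lower()
        if expected_keyword in answer:
            passed += 1
    # Report a graded score rather than a yes/no verdict, reflecting the
    # view that understanding is a spectrum, not a binary property.
    return passed / len(probes)
```

A score of 1.0 would not prove "real" understanding; it would only certify the specific capabilities the probes measure.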

For a system to have understanding, it needs systematic connections among different concepts, states, and actions. Today's language translation systems correctly associate the English water with the Spanish agua, but they establish no connection between water and electric shock.
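
The point about systematic connections can be illustrated with a toy concept graph. The nodes and edges below are illustrative assumptions; the interesting test is whether the system can reach electric shock from water, not merely agua.

```python
# A toy concept graph: understanding as systematic connections between
# concepts, states, and actions, represented as an adjacency mapping.
from collections import deque

concept_graph = {
    "water": {"agua", "wet", "drinkable", "ice", "electrical conductor"},
    "electrical conductor": {"water", "electric shock"},
    "electric shock": {"electrical conductor", "hair dryer"},
}

def connected(graph: dict, start: str, goal: str) -> bool:
    """Breadth-first search: can the system relate `start` to `goal`?"""
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for neighbour in graph.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append(neighbour)
    return False

print(connected(concept_graph, "water", "agua"))            # True
print(connected(concept_graph, "water", "electric shock"))  # True
```

A translation system whose graph contained only the water-agua edge would pass the first check and fail the second.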

Criticism of new advances in artificial intelligence comes from two main sources. First, the hype surrounding AI (from researchers, their organizations, and even governments and funding agencies) has gotten out of hand, even sparking fears of superintelligence and a robot apocalypse. Rational criticism that refutes such nonsense is welcome.

Second, there is an ongoing debate about the future direction of AI research and the allocation of government funding, and criticism is part of that debate. On one side are the advocates of connectionism, who champion deep learning and support continuing research along those lines. On the other side are advocates of symbolic AI methods, such as formal logic. A growing number of people advocate combining the two approaches in hybrid architectures.

Criticism is also helpful in this debate, because it informs how we choose to spend society's resources and money to advance the science and technology of artificial intelligence. However, I object to the claim that deep learning systems produce no true understanding and should therefore be abandoned.

On the contrary, deep learning systems have made great progress, and further research will give us a clearer view of intelligence. I like Lakatos' view that a research programme should be pursued until it stops producing results.

I think we should continue to pursue connectionist, symbolic, and the emerging hybrid approaches, because each still has much to contribute.

New directions in deep learning

Criticism of deep learning has given rise to new research directions. In particular, deep learning systems can match human performance on various benchmark tasks yet fail to generalize to tasks that appear very similar on the surface, and this has created something of a crisis in machine learning. Researchers are exploring new coping strategies, such as learning invariants and causal models, that apply to both symbolic and connectionist approaches.
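
One way to operationalize this concern is an invariance check: a model's prediction should not change under surface transformations that preserve meaning. The sketch below assumes a hypothetical `model.predict` interface and illustrative paraphrase pairs.

```python
# A rough probe of superficial generalization: count how often the model's
# prediction flips between two inputs that mean the same thing.

def invariance_failure_rate(model, pairs) -> float:
    """Fraction of meaning-preserving input pairs on which the model's
    prediction changes."""
    failures = sum(
        model.predict(original) != model.predict(variant)
        for original, variant in pairs
    )
    return failures / len(pairs)

paraphrase_pairs = [
    ("Water conducts electricity.",
     "Electricity is conducted by water."),
    ("The truck backed into the parking space.",
     "The truck reversed into the parking spot."),
]
# Hypothetical usage, given some trained model:
# failure_rate = invariance_failure_rate(model, paraphrase_pairs)
```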

I think we should pursue advances in the science and technology of artificial intelligence rather than argue over the definition of "real understanding". The focus should instead be on the capabilities systems might achieve in the next 5, 10, or 50 years, and we can define those capabilities by devising tests that measure whether a system possesses them.

To do this, capability claims must be made operational. In short, claims about artificial intelligence should be testable. This requires us to translate vague concepts such as understanding and intelligence into concrete, measurable capabilities, and that translation is itself a very useful exercise.

In operational tests, besides evaluating the input-output behavior of an AI system, we can also examine the internal structures (data structures, knowledge bases, and so on) that generate that behavior. A big advantage artificial intelligence has over neuroscience is that we can experiment on AI systems far more easily in order to understand and evaluate their behavior.
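
As an example of looking inside rather than only at input-output behavior, one common technique is a linear probe: train a simple classifier on a model's hidden activations to test whether a concept is decodable there. The activation-extraction step and the concept labels below are hypothetical.

```python
# Probe a model's internal representations: if a linear classifier trained
# on hidden activations performs at chance, the concept is likely not
# encoded (at least not linearly) in that layer.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_concept(activations: np.ndarray, labels: np.ndarray) -> float:
    """Return held-out accuracy of a linear probe on the activations."""
    X_train, X_test, y_train, y_test = train_test_split(
        activations, labels, test_size=0.3, random_state=0
    )
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return probe.score(X_test, y_test)

# Hypothetical usage: activations for sentences about water, labeled by
# whether the water described is liquid (1) or frozen (0).
# accuracy = probe_concept(hidden_states, liquid_labels)
```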

Focus on behavioral capabilities

One caveat, however. Connectionist approaches, including deep learning, often produce internal structures that are difficult to interpret, and the same appears to be true of our brains. We therefore should not make confirming the existence of particular structures (such as symbolic representations) a research goal. Instead, we should focus on the behavioral capabilities we want a system to acquire and study how its internal mechanisms achieve them.

For example, carrying on a successful conversation requires each participant to keep track of what has been said and of its consequences for the exchange. There are many ways to achieve this, and we should not necessarily require a deep learning system to maintain an explicit memory of the conversation history.

Conversely, the fact that we have written a specific internal structure into a system does not mean it will behave as we expect. Drew McDermott discusses this issue in detail in his essay Artificial Intelligence Meets Natural Stupidity.
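
To make the point concrete, here is a toy sketch of two alternative designs for tracking conversational context. The update and response functions are hypothetical placeholders; the point is that both designs can in principle achieve the capability, so neither should be mandated in advance.

```python
# Design A: explicit history memory - the full transcript is stored verbatim.
class ExplicitHistoryAgent:
    def __init__(self, respond_fn):
        self.history = []
        self.respond_fn = respond_fn  # maps a transcript to a reply

    def reply(self, utterance: str) -> str:
        self.history.append(utterance)
        return self.respond_fn(self.history)

# Design B: learned summary state - context is compressed into a fixed-size
# vector, as a recurrent network would do; no verbatim transcript is kept.
class SummaryStateAgent:
    def __init__(self, update_fn, respond_fn, state_size: int = 128):
        self.state = [0.0] * state_size
        self.update_fn = update_fn    # folds an utterance into the state
        self.respond_fn = respond_fn  # maps the state to a reply

    def reply(self, utterance: str) -> str:
        self.state = self.update_fn(self.state, utterance)
        return self.respond_fn(self.state)
```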

This pattern, in which artificial intelligence is criticized, improved, and criticized again, leads to the so-called AI effect: whatever today's most advanced systems achieve is declared not to be real understanding or real intelligence, and AI is therefore counted a failure. The result is that AI's successes are overlooked and investment shrinks. For example, there was a time when people believed a system would be intelligent if it could match human ability at chess or Go.

But when Deep Blue defeated Kasparov in 1997, a famous AI researcher argued that beating humans at chess had turned out to be easy, and that to demonstrate real intelligence we would have to solve the truck backer-upper problem: backing an articulated semi-trailer truck into a parking space.

History repeats itself

In fact, Nguyen and Widrow had already solved that very problem with neural networks nine years earlier. Today, many thoughtful critics are once again proposing new tasks and conditions for testing whether a system produces understanding.

Meanwhile, research and development in artificial intelligence keep improving system capabilities and creating value for society. It is important, both for academic integrity and for continued investment, that AI researchers accept credit when they succeed and take responsibility when they fail.

We must rein in the hype surrounding new advances in artificial intelligence, objectively measure whether our systems can understand users and their goals in different circumstances, and objectively survey the vast territory that artificial intelligence has opened up. Rather than fretting over whether the results count as "true" understanding, let us practice honest and effective self-criticism, and move on.