How do Humans Understand the Meanings of Words?
Ask your smartphone, “Where’s the University of Tokyo?” and it will tell you the address. Compose an email using voice input instead of the keyboard. An application instantly translates French newspaper articles on the Web into Japanese, and works in the other direction, too. What makes all this possible is Natural Language Processing technology, the specialty of Ms. Hitomi Yanaka, a lecturer with her own laboratory who has been awarded the title of Excellent Young Researcher.
Ms. Yanaka describes her dream as follows. “My goal is to combine approaches from logic and linguistics to create more reliable language processing technology that can understand language like a human being. I hope to see the day when we can have a natural conversation with artificial intelligence (AI) as if we were talking to a real person.” However, the road is steep and the summit distant. Most current research in this area focuses on processing natural language with machine learning, and Ms. Yanaka’s attempt to achieve human-like understanding by combining logic and linguistics is an ambitious project that only a few researchers are pursuing.
Natural language consists of the words and sentences that we humans normally use in our daily conversations, reading, and other forms of expression. I asked Ms. Yanaka to briefly sum up Natural Language Processing in a few words. “In a few words? That’ll be tough. There are so many different fields of research on natural language, including our own information science, as well as linguistics in the humanities stream. Cognitive science studies how people acquire language, and there is also a philosophical approach that considers language from the concept of what it is and what it means. However, we still don’t know a lot about how humans can speak and understand language so naturally. I believe that if we can bring together research on language from diverse fields like this, we’ll be able to grasp what a human understanding of language is. Many researchers in the field of Natural Language Processing study how to process words with machines, with efficiency and other factors in mind, but my approach to Natural Language Processing research seeks to incorporate knowledge from various disciplines and elucidate the mechanism by which people understand the meaning of words.”
A Black Box of Deep Learning
When the first boom in Natural Language Processing research occurred in the 1960s, observers were optimistic that automatic translation would be easy given a large enough dictionary. Of course, it wasn’t that simple, and only recently has something practical finally appeared, thanks to the rapid progress of deep learning coupled with advances in computing power.
“There’s a lot of research on the use of deep learning to create language models that are statistically learned from vast amounts of data and apply them to translation and dialogue, and some of these models claim to have achieved the same level of accuracy in understanding language as humans. But with technologies that use deep learning, the process of how that output was obtained from the input is a black box, and we don’t know if they really understand the language like a human,” Ms. Yanaka pointed out.
For a computer to process words, it’s first necessary to convert the words into symbols that the machine can handle, such as numbers. Techniques using deep learning therefore represent the meanings of words as vectors, and the similarity between two words A and B is determined by the extent to which their vectors point in the same direction. In this way, “understanding the meaning” is replaced by “calculating in vector space”. This makes it possible to analyze vast numbers of words and sentences statistically and probabilistically through deep learning. By “black box,” Ms. Yanaka means this statistical and probabilistic process: it may be practical and useful, but she questions whether it resembles the process carried out by the human brain.
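As a toy illustration of this vector-space view, similarity can be computed as the cosine of the angle between two word vectors. The three-dimensional vectors below are invented for the example; real models use hundreds of dimensions learned from data.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional embeddings, invented for this sketch.
apple = [0.9, 0.1, 0.3]
tangerine = [0.8, 0.2, 0.35]
bicycle = [0.1, 0.9, 0.0]

print(cosine_similarity(apple, tangerine))  # close to 1: similar words
print(cosine_similarity(apple, bicycle))    # much smaller: dissimilar words
```

Here “understanding the meaning” really has been replaced by arithmetic: nothing in the computation inspects what the words refer to.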
“In fields other than language processing, such as linguistics, a vast amount of knowledge about language has been accumulated, such as the fact that certain sentences are grammatically correct and others are not. Philosophy contributes the concept that the language we use has productivity, in that we can produce as many new sentences as we like from a finite vocabulary. Language also has compositionality and systematicity. Systematicity means that people naturally acquire and use the systems hidden behind words; for example, when they first hear the sentence ‘Bob loves John,’ they can also understand the sentence ‘John loves Bob.’ One of my research goals is to evaluate whether deep learning understands the meaning and grammar of words using insights from these various fields. I adjust the deep learning parameters in accordance with my hypothesis, and then verify the outcome on a computer, but it’s a lot of work (laughs).”
Developing a System with Human-like Reasoning
That approach seeks to improve deep learning itself, but Ms. Yanaka feels it has limitations. What she is now pursuing is the idea of adding a completely different method to deep learning, called “Natural Language Inference technology through the fusion of symbolic logic.” It’s a difficult term, but according to Ms. Yanaka, it can be summed up briefly as “human-like inference.”
“We know that deep learning has difficulty with double negatives, triple negatives, or quantitative expressions. Even reputable automatic translation applications built on deep learning technology have trouble translating double-negative sentences. On the other hand, symbolic logic (formal logic) is suitable for dealing with such negations and quantifications. For example, it’s difficult to represent the concept ‘all’ in a vector, but it’s easy in a logical expression. It’s more efficient to teach grammatical rules directly to the model in advance using symbolic logic. If deep learning is an inductive method, this could be called a deductive method. In other words, we believe that combining deep learning with symbolic logic will enable us to achieve a capacity for human-like inference.”
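The contrast she draws can be made concrete in a minimal sketch: “all” is awkward to encode as a single vector, but direct to state and check as a quantified formula evaluated over a domain. The domain and predicates below are invented for illustration.

```python
# Evaluate the first-order sentence  ∀x. human(x) → mortal(x)
# over a small, invented finite domain.
domain = ["socrates", "plato", "rex"]
human = {"socrates", "plato"}
mortal = {"socrates", "plato", "rex"}

# "P → Q" is "(not P) or Q"; "all" is an exact universal check, not a statistic.
all_humans_mortal = all((x not in human) or (x in mortal) for x in domain)
print(all_humans_mortal)  # True

# Double negation, which trips up statistical models, is exact in logic: ¬¬P is P.
p = all_humans_mortal
print((not (not p)) == p)  # True
```

The point of the sketch is that quantifiers and negation have exact, rule-governed meanings in logic, whereas a vector can only approximate them statistically.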
“The idea is to build what you might call a composite system with deep learning. For example, if we transform two statements A and B into logical expressions, and the proof between those expressions shows that ‘if A then B is true, and if B then A is true,’ then the meanings of A and B are equivalent. In this way, we use symbolic logic to determine the similarity between A and B. Meanwhile, deep learning is better at capturing things like the similar tendencies of the words ‘apple’ and ‘tangerine’ to appear in sentences, so we leave that side to machine learning. In other words, the two components complement each other’s information to advance the inference.”
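The bidirectional proof Ms. Yanaka describes can be sketched in miniature with propositional logic: A and B are equivalent exactly when each entails the other. The formulas below (a De Morgan pair standing in for two parsed sentences) are invented for illustration; real systems prove entailment over richer first- or higher-order representations.

```python
from itertools import product

def entails(premise, conclusion, names):
    """premise |= conclusion: conclusion holds in every model where premise holds."""
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        if premise(env) and not conclusion(env):
            return False
    return True

def equivalent(a, b, names):
    """A and B mean the same thing iff A entails B and B entails A."""
    return entails(a, b, names) and entails(b, a, names)

# Toy stand-ins for two sentences: "not (p or q)" vs "(not p) and (not q)".
f1 = lambda env: not (env["p"] or env["q"])
f2 = lambda env: (not env["p"]) and (not env["q"])
print(equivalent(f1, f2, ["p", "q"]))  # True: the two "sentences" are equivalent
```

Truth-table enumeration only works for tiny propositional examples; the design idea it illustrates, proving entailment in both directions, is what carries over to larger systems.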
In fact, it has been proposed that the human brain may contain both a formal logic component and a neural network component, along the lines of the so-called System 1 and System 2 hypothesis of decision-making in cognitive science. System 1 roughly interrogates the neural network to conclude that ‘this sentence and this set of sentences are candidates for similar sentences,’ and the logical System 2 then checks the grammatical and logical details of those candidates. Ms. Yanaka believes that people’s cognitive mechanisms may also be structured in this way.
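That two-stage picture can be sketched as a toy retrieve-then-verify pipeline. Everything below, the sentences, the two-dimensional vectors, and the polarity check standing in for a logical prover, is invented for illustration.

```python
def system1_candidates(query_vec, corpus, top_k=2):
    """System 1: fast, approximate retrieval by a crude dot-product score."""
    scored = sorted(corpus, key=lambda s: -sum(a * b for a, b in zip(query_vec, s["vec"])))
    return scored[:top_k]

def system2_verify(query_negated, candidate):
    """System 2: exact check; here a trivial stand-in that compares polarity."""
    return candidate["negated"] == query_negated

corpus = [
    {"text": "The report was approved.",     "vec": [0.90, 0.10], "negated": False},
    {"text": "The report was not approved.", "vec": [0.88, 0.12], "negated": True},
    {"text": "Cats sleep a lot.",            "vec": [0.10, 0.90], "negated": False},
]

query = {"vec": [0.9, 0.1], "negated": False}
candidates = system1_candidates(query["vec"], corpus)
results = [c["text"] for c in candidates if system2_verify(query["negated"], c)]
print(results)  # System 1 retrieves both report sentences; System 2 drops the negated one
```

The vector stage happily ranks the negated sentence as nearly identical to the query, which is exactly the failure the symbolic stage is there to catch.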
Still, what a vast frontier language processing is. Words are what we ourselves speak, hear, write, and read. “If you regard everyday language as a ‘phenomenon,’ there are so many different phenomena: negation, quantification, temporal relations, comparatives, and so on. I hope to realize a system that can robustly represent meaning and reason like a person about just one of these linguistic phenomena. Somehow, soon.”
Research and Jazz are Similar
In her master’s program, Ms. Yanaka’s major was applied chemistry, which is totally unrelated to Natural Language Processing. However, at the company where she worked after graduation, she was entrusted with the development and operation of a patent search system. Her decision to return to the doctoral program was prompted by the many customer requests she received at this time.
“Our customers wanted to know whether they could search for patent text more intelligently. Whether it would be possible to search at the free-text level, whether it would be possible to determine whether the meanings of sentences that contain negation expressions are similar to those that do not, or whether they could automatically perform some kind of fact-checking. I think such needs also arise in financial documents, medical electronic health records, and so on, but these kinds of searches often have to be done manually, and not much research had been done into the subject. So, I decided to enter the doctoral program and start my own research.”
“When I began, I realized that research into language processing depends strongly on a fusion of the humanities and sciences. The language used in philosophy is logic, while logic is also the basis of information science. From this aspect, interaction with researchers from different fields has become indispensable for research.”
“I read a lot of papers on philosophy and linguistics, and many of those who are active in discussions are also professors of philosophy and linguistics. I think that both fields are quite profound. Machine learning is technologically amazing: there’s a lot of discussion about what kinds of algorithms are best and how to make learning more efficient, but most research doesn’t go deeply into the meanings of words. On the other hand, linguistics and philosophy involve meticulous analysis of single sentences, and can also give rise to heated debates. Anyway, the more you read, the more you realize you know so little.”
Ms. Yanaka loved her ethics classes in high school. “I’ve been interested in both philosophy and linguistics from that time on, and it’s stayed with me since,” she said. “I read Nietzsche, Wittgenstein, and other philosophers, and sometimes I wish I’d pursued them further back then.” She continued, “I want to encourage students that if there’s anything you’re interested in now, go for it wherever possible. As the saying goes, ‘There’s no time like the present,’ and I think it’s important to pursue your interests.”
Ms. Yanaka is also an amateur jazz pianist. “I started in college, influenced by my father, who loved jazz. My favorite jazz artist is Bill Evans. Research and jazz are similar in my mind: there’s a sense of freedom in both, and it’s very important to be interactive with the people around you. The feeling of working together with a group of people to create a single melody is very like my research: the overall topic is fixed, but there’s a certain amount of freedom within that range.”
She says that when she starts something, she has to follow it through to the end, like practicing the piano all day long to copy Bill Evans’ piece ‘Israel’ to perfection. Ms. Yanaka says that the most enjoyable time is when she’s creating something, whether it’s programming or music.
So now, Ms. Yanaka, how would you describe your goals as a researcher? “I’ve recently come to the realization that research is something that continues for a lifetime, rather than having to conform to a series of deadlines. There’s a philosophy lecturer who’s still active and healthy at 88 years of age. Many linguists and philosophers are still working no matter how old they get. So, one thing you might want to consider is choosing a path of research that will enable you to have a serene life. Come to think of it, now that you just asked me, I’m wondering whether I should have had more goals. Yes, maybe I should have (laughs).”
※ Year of interview: 2022
Interview/Text: Minoru Ota
Photography: Junichi Kaizuka