Frontiers of Science

Creating the digital twin of Earth

YOKOYA Naoto

Associate Professor, Graduate School of Frontier Sciences, the University of Tokyo

June 3, 2024


Preserving the planet for the next generation

A digital twin is a precise copy, in virtual space, of an object in the real world. For example, if a model of a jet engine as detailed as the real one could be reproduced in virtual space, it would become possible to run computer simulations indistinguishable from the real thing. By feeding operational data from the real world into this virtual engine, we could quickly find the cause of an engine failure when it occurs, or perhaps even predict failures in advance. In short, recreating a twin in the virtual world helps us understand the real one more deeply and find ways to solve its problems. That is the concept of the digital twin.

What if a digital twin of Earth existed? What if a twin that is exactly like the real Earth could be recreated in virtual space? And what if changes in weather, disaster conditions, or urban activities could be updated in real time? Herein lie Associate Professor Yokoya’s goals and dreams.

“There are various global-scale problems related to natural disasters, climate change, agriculture, and many other areas. It is very difficult to solve these problems relying on human capacities alone. Wouldn't it be wonderful if there were an AI that could perceive the entire planet by analyzing observational data, including light at wavelengths invisible to human eyes, and help us solve these problems? I want to pursue the kind of venture that could lead to such technology.”

In other words, his goal is to create a digital twin: an intelligent information processing system that can not only deepen humanity’s understanding of the real world but also help preserve it.

“It is widely known that we do not have unlimited sources of energy. Ever since I was a small child, I have felt that the way humanity is devouring resources has to stop somewhere. We have a limited amount of time left to respond to the environmental challenges facing us. How can we live sustainably to preserve our planet for the next generation? While it is interesting to talk about moving to another planet, I don't think there is any alternative to Earth. Therefore, I think we need to create technologies that can tell us what we should do for Earth now. It was the awareness of such issues that led me to this line of research, even as a student.”

Integrating remote sensing and AI

Yokoya’s research is in a new field that has emerged at the boundary between AI and remote sensing. Actually, Yokoya majored in aerospace engineering at the School of Engineering and started out as a remote sensing scientist.

“Everyone around me was researching airplanes and rocket engines or working on launching small satellites; those were the mainstream fields. However, I wanted to contribute to solving global issues such as climate change. So, when I learned about the research field of Earth observation and remote sensing technology, my interests shifted away from rockets. Rather than launching satellites, I was interested in how to utilize them after launch.”

Remote sensing is a technology for observing the shape and properties of Earth's surface using cameras and radars mounted on satellites and airplanes.

“One of the challenges of remote sensing is resolution, or how much detail can be seen. That was the beginning of my research: using digital image processing to improve the spatial resolution of images, a technology called ‘super-resolution imaging.’ Various sensors onboard satellites can provide large amounts of data, but each sensor has its own limitations. For example, some sensors have low resolution but can capture many colors, while others have high resolution but can only capture grayscale images. However, if you can exploit the strong points of each sensor and combine their data skillfully, you can get images with both high spatial and spectral resolution. That is how I started doing research, using computational methods to reconstruct such images.”
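
To make the idea of combining sensors concrete, here is a minimal sketch of one classical data-fusion technique, Brovey-style pansharpening, in which a low-resolution multispectral image borrows spatial detail from a high-resolution grayscale (panchromatic) image. The array shapes and toy data are illustrative assumptions, not Yokoya’s actual method.

```python
# A minimal sketch of sensor fusion ("pansharpening"): combining a low-resolution
# multispectral image with a high-resolution panchromatic image. Illustrative only.
import numpy as np
from scipy.ndimage import zoom

def brovey_pansharpen(ms_lowres: np.ndarray, pan_highres: np.ndarray) -> np.ndarray:
    """Fuse a (bands, h, w) multispectral image with a (H, W) panchromatic image."""
    bands, h, w = ms_lowres.shape
    H, W = pan_highres.shape

    # Upsample every band to the high-resolution grid (bilinear interpolation).
    ms_up = zoom(ms_lowres.astype(np.float64), (1, H / h, W / w), order=1)

    # Synthetic low-resolution intensity: the average of the upsampled bands.
    intensity = ms_up.mean(axis=0) + 1e-12  # avoid division by zero

    # Inject high-frequency spatial detail from the panchromatic image.
    return ms_up * (pan_highres / intensity)[None, :, :]

# Toy usage: a 4-band 50x50 multispectral scene fused with a 200x200 panchromatic image.
ms = np.random.rand(4, 50, 50)
pan = np.random.rand(200, 200)
sharpened = brovey_pansharpen(ms, pan)
print(sharpened.shape)  # (4, 200, 200): high spatial and spectral resolution
```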

However, technology alone is not enough to acquire good images. Yokoya realized that acquiring good images calls for an understanding that goes beyond the technology itself, such as what kind of information is being sought and how the images will be used.

“How can we extract meaningful, three-dimensional information, including altitude, from images obtained by remote sensing? How can we automate the process and develop technology that works anywhere in the world? Gradually, my research shifted in this direction, towards dynamic information processing on a broad scale: for example, real-time monitoring of disasters over wide areas, environmental assessment based on the amount of forest biomass, or mapping which crops are being grown and how.”

His research experience at the German Aerospace Center was a turning point in this direction.

“I worked in Germany for two years because they wanted to use the research I had done as a doctoral student on a Japanese satellite project for a similar project in Germany. The research there was on using artificial intelligence to automate processing of Earth observation data, close to my current research topic.”

Advanced image information processing with deep learning

The use of machine learning to process remote sensing data dates back to the late 1980s, and many researchers have relied on AI as a powerful tool ever since. Yokoya says that the rise of deep learning based on neural networks in the 2010s, combined with improvements in computer performance, enabled more advanced information processing and also accelerated the pace of research itself.

“There is a famous American Earth observation satellite called Landsat, which has an image resolution of 30 meters. Even so, each image contains more than 10 dimensions of spectral information. Conventional machine learning is sufficient to process this amount of data. However, a deep learning-based approach becomes necessary with higher-resolution and more complex data, which could mean very high-dimensional data or daily time-series data.”

But what does it mean to use AI to extract information from satellite images?

“For example, a color image contains only three numerical values in the three channels of RGB (the three primary colors of light: red, green, and blue). Multispectral images are commonly used for Earth observation and contain a lot of information other than visible light. They also contain values in the near-infrared and short-wave infrared regions, which are wavelengths invisible to humans. Furthermore, images taken by a hyperspectral camera, which has a spectral resolution of 100 or 200 channels, contain a large amount of very complex information in just one pixel. In supervised machine learning, humans act as teachers and give the computer many concrete examples of input and output to learn how to interpret this kind of image data. For example, if the computer is asked to judge whether a pixel is a forest or not, it is taught through specific examples that green usually has a higher value than blue when it is a forest, and that the reflectance in the near-infrared range is very high.”
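
As a toy illustration of this supervised setup, the sketch below trains a standard classifier on synthetic per-pixel spectra (blue, green, red, near-infrared) in which “forest” pixels are generated with higher green than blue values and very high near-infrared reflectance, as described in the quote. The data generator, band choice, and classifier are assumptions for illustration, not real satellite data or Yokoya’s pipeline.

```python
# Supervised pixel classification from multispectral values: each pixel is a vector
# of reflectances, and labeled examples teach the model what "forest" looks like.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def synth_pixels(n, forest):
    """Generate toy per-pixel reflectances: (blue, green, red, near-infrared)."""
    if forest:
        base = np.array([0.05, 0.10, 0.06, 0.45])  # green > blue, very high NIR
    else:
        base = np.array([0.15, 0.16, 0.18, 0.20])  # e.g., bare soil / built-up area
    return base + rng.normal(0.0, 0.02, size=(n, 4))

# Labeled training examples: 1 = forest, 0 = not forest.
X = np.vstack([synth_pixels(500, True), synth_pixels(500, False)])
y = np.concatenate([np.ones(500), np.zeros(500)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify new pixels from their spectra alone.
new_pixels = np.vstack([synth_pixels(3, True), synth_pixels(3, False)])
print(clf.predict(new_pixels))  # expected: [1. 1. 1. 0. 0. 0.]
```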

Yokoya says that deep learning has made it possible to solve many problems if one can prepare large amounts of optimal training data. However, there are some things for which it is not possible to collect such data, one example being the height of buildings. Measuring the height of each building is very expensive, and the amount of data is enormous. There are many types of data that cannot be easily collected in this way. What is one to do?

“Recently, we have been working on a method that uses mathematical models and computations based on physical phenomena; in other words, a method that uses numerical simulations and computer graphics. For example, let us suppose a disaster occurs and causes a landslide. In this case, we can run computations and simulations based on our knowledge of physics and estimate how the topography would change and what it might look like in an optical image.”

By simulating many landslides in a computer, the researchers can synthesize pairs of satellite images and topographic changes. A deep learning model can then be trained to determine topographic changes from satellite images taken before and after a disaster. This approach can be applied even when the types of pre- and post-disaster satellite images differ. For example, if there are optical images of an area before the disaster, but only radar images are available afterward due to bad weather or nighttime, this approach nonetheless has the potential to provide a detailed picture of the disaster.
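
A minimal sketch of this simulation-to-training idea is shown below, with the physics-based simulation replaced by a toy generator that imprints a landslide-shaped dent on synthetic terrain; a small convolutional network then learns to predict the elevation change from the pre- and post-event image pair. Everything here, from the generator to the network, is an illustrative assumption rather than the actual research code.

```python
# Train a change-estimation model on synthetic (pre-image, post-image, change) triples.
import torch
import torch.nn as nn

def simulate_pair(size=64):
    """Toy stand-in for a physics-based landslide simulation."""
    terrain = torch.rand(1, size, size)
    yy, xx = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    cy, cx = torch.randint(16, size - 16, (2,))
    dent = torch.exp(-(((yy - cy) ** 2 + (xx - cx) ** 2) / 50.0)).unsqueeze(0)
    change = -0.5 * dent                      # elevation change (training target)
    pre = terrain                             # image before the event
    post = terrain + change                   # image after the event
    return torch.cat([pre, post], dim=0), change

class ChangeNet(nn.Module):
    """Small fully convolutional net: 2-channel (pre, post) input -> change map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

model = ChangeNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    batch = [simulate_pair() for _ in range(8)]
    x = torch.stack([b[0] for b in batch])
    y = torch.stack([b[1] for b in batch])
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(f"step {step}: loss {loss.item():.4f}")
```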

Machine learning models that work anywhere in the world

Currently, Yokoya’s sole focus is on developing “Multi-Dimensional Ultra-High Resolution Earth Observation Intelligence,” which has been selected as a project in “Fusion Oriented Research for disruptive Science and Technology” by the Japanese government. The name of the project does not roll off the tongue easily, but it is a big step toward creating a digital twin of Earth.

“It is a seven-year project. The goal for the next four years (at the time of the interview: 2024) is to develop a machine learning model that can automatically generate, from a single high-resolution image, a map with very high-resolution semantic information about objects and their locations, including three-dimensional information, no matter where on the planet the image was taken.”
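
The kind of model described in the quote can be pictured as a network with one shared image encoder and two output heads: one for per-pixel semantic classes and one for per-pixel height. The sketch below is a minimal, assumed illustration of that multi-task structure, not the project’s actual architecture.

```python
# One image in, two maps out: semantic classes and height for every pixel.
import torch
import torch.nn as nn

class SemanticHeightNet(nn.Module):
    def __init__(self, num_classes=8):
        super().__init__()
        # Shared convolutional encoder over an RGB (or multispectral) image tile.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # Task heads: per-pixel class scores and per-pixel height.
        self.seg_head = nn.Conv2d(32, num_classes, 1)
        self.height_head = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.height_head(feats)

model = SemanticHeightNet()
image = torch.rand(1, 3, 128, 128)              # one image tile
class_logits, height = model(image)

# Joint training combines a classification loss and a regression loss, e.g.:
labels = torch.randint(0, 8, (1, 128, 128))     # dummy per-pixel class labels
true_height = torch.rand(1, 1, 128, 128)        # dummy per-pixel heights
loss = nn.CrossEntropyLoss()(class_logits, labels) + nn.L1Loss()(height, true_height)
loss.backward()
print(class_logits.shape, height.shape, float(loss))
```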

If this could be done simultaneously for the entire planet, the result would truly be a digital twin of Earth. That, of course, is the ultimate goal. Still, a machine learning model that works in any region of the globe would already be a tremendous achievement.

“I believe that everyone in the world should have the tools to know and understand the place, region, or country where they live, without being dependent on a single company's commercial app. The ‘OpenEarthMap’ that we are developing as part of this seven-year project is an attempt to provide such an open tool for everyone to use.”

Yokoya puts an emphasis on “everyone in the world.” This is because machine learning models trained on data from Europe and the U.S., where there are many satellite images, will perform much worse in regions such as Asia, Africa, and South America.

“I think this is unacceptable. It is crucial that a model works properly anywhere in the world. The concept of this project is to create such a machine learning model, and that is what I want to do. It is also very important for disaster response.”

Why did Yokoya choose aerospace engineering as his major in the first place?

“In fact, during my first and second years of undergraduate studies, I was a member of the university rowing team, and I was absorbed in the world of sports. I am the type of person who, once I have started something, needs to see it through. I was even staying overnight to train as much as I could (laughs). So, I didn't have time to think about the future, and before I knew it, it was time to declare a major (for the third and fourth years of undergraduate studies). I thought about what I wanted to be when I was little and remembered that I wanted to be an astronaut. As a child, I was very interested in what the world and the frontiers of the universe were like. So, I chose aerospace engineering, without any definite reason… I chose it because I was in a rush to declare a major (laughs).”

That said, the choice may not have been as arbitrary as it sounds, given that Yokoya has been concerned about the environment ever since he was a child. He has this message for the students who will follow him.

“Everyone has likes and interests. You will have fun if you follow where they lead. I hope that you will keep your curiosity and pursue what you are passionate about.”

※Year of interview: 2024
Interview/Text: OTA Minoru
Photography: KAIZUKA Junichi

YOKOYA Naoto
Associate Professor, Graduate School of Frontier Sciences, the University of Tokyo
2015-2017, Humboldt Research Fellow, German Aerospace Center and Technical University of Munich. 2018, Unit Leader, RIKEN Center for Advanced Intelligence Project (AIP). 2020, Lecturer, Graduate School of Frontier Sciences, the University of Tokyo.