The Future Lab AI4EO (Artificial Intelligence for Earth Observation) consolidates Germany's leading position in AI4EO. The Lab serves multiple research fields: reasoning, uncertainty quantification, explainable AI, physics-informed machine learning, complex data structures, and more. It brings together 20 renowned international organizations from 9 countries and 27 highly ranked scientists at all career levels to address fundamental challenges in cutting-edge artificial intelligence research specific to Earth observation. The research carried out in the Future Lab AI4EO will not only advance Earth observation science but also make key contributions to the interpretability of AI, its ethical implications, and AI4EO technology transfer; its fields of application are also highly relevant to society. The Munich metropolitan region is one of the top places worldwide for AI education and research. The Lab itself is physically located at the new TUM campus in Taufkirchen/Ottobrunn, where TUM is currently establishing its new Department of Aerospace and Geodesy as part of the Bavarian space initiative. The Future Lab AI4EO is also associated with the Munich Data Science Institute.
The national excellence center “Machine Learning for Earth Observation” (ML4Earth) will conduct its own research at the highest international level by tackling fundamental methodological challenges in AI4EO and applying them to the European mission of a Digital Twin Earth. ML research directions will include physics-aware machine learning, reasoning, uncertainty estimation, explainable AI, sparse labels and transferability, as well as deep learning for complex data structures. By investing significant effort in advancing the community’s knowledge in these fields, we create direct impact in various application fields and help shape our global future. We will be able to assess the uncertainties of future sea level rise; make it more transparent and explainable how AI algorithms capture the growing threat that rising temperatures pose to global forests; quantify the phenomenon of rapid permafrost thawing; pair physical hydrological models with AI data science to predict Europe’s future ability to store water in cities under the increasing threat of extreme weather; map physical and chemical soil parameters at large scale from sparse knowledge of the soil parameters of localized areas; and apply deep learning to comprehend the complexity behind climate tipping points.
So2Sat is an ambitious European Research Council (ERC) Starting Grant project. In the project, we will use revolutionary mapmaking methods to investigate how human settlements grow. We have the privilege of access to data supplied by several German and European Earth observation satellites, which are equipped with innovative sensor technologies. We will develop new algorithms for deriving geo-information from these measurements. This makes it possible to create high-resolution 3D/4D maps of cities down to the individual building. For the first time, this information will also be combined with data from social networks: crowdsourcing platforms such as OpenStreetMap provide up-to-date map material, while photos posted to the networks provide authentic, current images in which buildings can be seen or which, for example, reveal the extent of damage caused by a flood. The major challenge here is consolidating this information and evaluating it automatically on a global scale.
With the launch of the Sentinel satellite missions, the European Copernicus program has made an unprecedented volume of Earth observation data freely available. These data provide information about the covered area in different spectral bands and at different times. The former is used to identify vegetated areas via the characteristic reflectivity of plants; the latter allows researchers to observe changes over time – a valuable source of information in times of rapid climate change and its consequences for food security. However, due to the complex nature of Earth observation data, current research is still only beginning to understand the opportunities created by analyzing this huge volume of data. This project is pushing forward the development of Artificial Intelligence methods that are indispensable in this regard. With iMonitor, we focus on algorithms that utilize the vast information contained in multi-temporal data. The goal is to detect changes, especially in agricultural areas, at large spatial scales, unimpeded by cultural and geospatial differences of the land surface and based only on a few localized reference areas with well-known characteristics. Ultimately, we will be able to provide an important data source for quantifying the fragility of food supply dynamics and the impact of single-crop practices under increased pressure on soil, water availability, and drought.
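As a minimal illustration of the multi-temporal principle (a sketch, not the project's actual method), agricultural change between two acquisition dates can be flagged by thresholding the difference of a vegetation index computed from the red and near-infrared bands; the band arrays and the threshold value below are assumptions for demonstration:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + eps)

def change_mask(nir_t0, red_t0, nir_t1, red_t1, threshold=0.2):
    """Flag pixels whose NDVI changed by more than `threshold` between dates."""
    delta = ndvi(nir_t1, red_t1) - ndvi(nir_t0, red_t0)
    return np.abs(delta) > threshold

# Hypothetical two-pixel example: pixel 0 goes from dense vegetation to
# bare soil (flagged), pixel 1 stays unchanged (not flagged).
mask = change_mask(np.array([0.5, 0.4]), np.array([0.1, 0.1]),
                   np.array([0.3, 0.4]), np.array([0.25, 0.1]))
```

Real multi-temporal pipelines operate on full Sentinel-2 time series and learned features rather than a single index difference, but the thresholded-difference idea is the common baseline.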
OSMSim: OpenStreetMap Boosting using Simulation-Based Remote Sensing Data Fusion
The main subject of the scientific investigations in this project is the improvement of building information (geometry, attributes) in OpenStreetMap (OSM) using a simulation-based fusion of heterogeneous remote sensing data, and the use of the updated OSM data in follow-up applications. The working basis is the simulation environment SimGeoI, which enables the modeling of the imaging processes of different sensors by exploiting the metadata of remote sensing acquisitions as well as available geometric prior knowledge of the scene under investigation. SimGeoI allows not only a coarse semantic interpretation of the scene, but also an object-related alignment of corresponding scene elements. In this project, SimGeoI is used to compare geometric OSM information with remote sensing data produced under different sensor configurations and at different acquisition times, and to enrich OSM with geometric corrections (position, height) and attributes (e.g. building type, roof structure) gained from a fusion of the different remote sensing data. The scientific investigations of this project aim at three core themes: First, a methodological framework will be developed that allows the geometric correction of OpenStreetMap data based on the prediction and comparison of building shapes, using a pair of remote sensing images (optical, SAR, or mixed). In a second step, geometrically improved OSM information will be used to extract building-related attributes from multi-modal remote sensing data. Finally, the transferability of the developed methods will be analyzed experimentally, and interfaces to follow-up applications (Open Event Mapping, Climate Event Portal, Virtual Reality) will be investigated. The methodology will be accompanied by validation in order to evaluate the positional, thematic, and temporal accuracy of the derived results.
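The geometric-correction idea of comparing a predicted building shape with its appearance in remote sensing data can be sketched very simply: search for the pixel offset that maximizes the overlap (intersection over union) between an OSM-derived building mask and a mask detected in imagery. This is an illustrative toy version under assumed binary masks, not the SimGeoI methodology itself:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def best_shift(osm_mask, rs_mask, max_shift=3):
    """Grid-search the (row, col) offset that best aligns an OSM-derived
    building mask with a mask detected in remote sensing data."""
    best, best_score = (0, 0), -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(osm_mask, dy, axis=0), dx, axis=1)
            score = iou(shifted, rs_mask)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best, best_score

# Hypothetical example: a 3x3 building footprint that is offset by
# (1, 2) pixels between the OSM mask and the detected mask.
osm = np.zeros((10, 10), bool); osm[2:5, 2:5] = True
rs = np.zeros((10, 10), bool); rs[3:6, 4:7] = True
shift, score = best_shift(osm, rs)
```

The actual project compares shapes rendered under the true sensor geometry (optical or SAR), which is what the SimGeoI simulation provides.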
Extreme rainfall events are a common problem that municipalities need to deal with. Because climate change increases the probability of extreme weather events, immediate action is key to preventing massive damage and mitigating risks. We respond to this challenge by leveraging cutting-edge data science to manage the risks associated with heavy rainfall events in a holistic fashion. We develop and train machine learning algorithms that process precipitation data in combination with remote-sensing information on topography and urban morphology, in order to quantify the effectiveness of drainage and water retention in the urban environment.
HGF W3: Data Science in Earth Observation – Big Data Fusion for Urban Research
By 2050, around three quarters of the world’s population will live in cities. The new dimension of ongoing global urbanization poses fundamental challenges to societies across the globe. Despite increasing efforts, global urban mapping still lags behind the geometric, thematic, and temporal resolutions of geo-information that would be needed to address these challenges.
Recently, big Earth observation data volumes on the order of tens of petabytes (PB) from complementary data sources have become available. For example, Earth observation (EO) satellites operated by space agencies reliably provide geodetically accurate, large-scale geo-information on cities worldwide on a routine basis from space, but the data availability is limited in resolution and viewing geometry. On the other hand, constellations of small, less expensive satellites owned by commercial players such as Planet have been providing daily global coverage since 2017, yet with reduced geometric and radiometric accuracy.
As complementary sources of geo-information, massive amounts of imagery, text messages, and GIS data from open platforms and social media form a temporally quasi-seamless, spatially multi-perspective data stream, but of unknown and diverse quality.
This project aims at jointly exploiting big data from social media and satellite observations for urban science.
Deep Transfer Learning in Remote Sensing
This project addresses knowledge transfer in Remote Sensing (RS) from an annotation-rich source domain to an annotation-scarce target domain by reducing their semantic discrepancy (domain shift), benefiting the latter and its follow-up applications without the need for large amounts of manually labeled data. Specifically, the goal of the project is to design a universal deep Transfer Learning (TL) framework for RS data, named the deep RS-TL framework. Within this framework, on the one hand, we will develop several core deep TL algorithms to tackle fundamental challenges of transferring knowledge in remote sensing, including source-target alignment, multi-temporal adaptation, multi-source adaptation, multi-scale adaptation, spatial-spectral adaptation, cross-task TL, and cross-modality TL between source and target domains. On the other hand, we will build intelligent RS imagery annotation software that integrates all developed algorithms to achieve flexible, personalized, and intelligent annotation for more efficient RS label collection. As a result, the anticipated deep RS-TL framework will considerably facilitate the practical application of machine learning in remote sensing by relaxing its heavy dependence on laboriously labeled data.
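One standard way to quantify the domain shift mentioned above — given here as an illustrative sketch, not the project's specific algorithm — is the Maximum Mean Discrepancy (MMD) between the feature distributions of the source and target domains; adaptation methods then minimize such a statistic during training. The feature matrices and kernel bandwidth below are assumptions:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(source, target, gamma=1.0):
    """Squared Maximum Mean Discrepancy between two feature samples:
    zero when the distributions match, large under strong domain shift."""
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

# Hypothetical features: identical samples give ~0, a shifted target
# domain gives a clearly positive discrepancy.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
```

In a deep TL setting, the same statistic would be computed on network activations and added to the task loss, so that the feature extractor learns domain-invariant representations.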
The Sentinel-1 and Sentinel-2 missions outperform comparable satellite missions: the Sentinel satellites are designed to observe the entire Earth’s surface at higher cadence, with higher spatial resolution, and with additional sensors scanning at radar wavelengths. These advantages also come with challenges. First, the figure shows how fundamentally different the information content is for images of the two missions, which observe the same region of Munich at radar and optical wavelengths, respectively. Moreover, all of the mentioned advantages of the Sentinel missions result in a vastly increased total data volume. The community has therefore had to investigate innovative approaches to process these data effectively for large-scale monitoring projects. We have responded to this challenge with the project AI4Sentinels. It turns out that Artificial Intelligence (AI) techniques help to process these diverse data and extract maximal information from the imagery.
The main subject of the scientific investigations within this project is the reconstruction of urban scenes by a stereogrammetric fusion of high-resolution spaceborne optical and SAR image data. The goal of this kind of sensor data fusion is to obtain a comprehensive three-dimensional description of the urban topography. There are several reasons for this fusion: on the one hand, spaceborne optical imagery in particular is widely available and stored in great amounts in international Earth observation archives. On the other hand, recent radar remote sensing missions, such as TerraSAR-X/TanDEM-X or COSMO-SkyMed, have led to a growing availability of high-resolution SAR data. Making use of the possibility of acquiring new SAR data independently of daylight or weather conditions, this project aims both to exploit existing archive data of regions of interest with greater flexibility and to enable timely mapping of critical areas through the optimum combination of arbitrary satellite image data available on short notice. The results of this project will help to make the analysis of heterogeneous satellite image data as flexible as possible, in particular with respect to rapid 3D mapping in time-critical applications.
The increased availability of data from different satellite and airborne sensors for a particular scene makes it desirable to jointly use data from multiple sources for improved information extraction, hazard monitoring, and land cover/land use mapping. In this context, hyperspectral sensors provide detailed spectral information, which can be used to discriminate different classes of interest, but they do not provide structural and elevation information. On the other hand, LiDAR data can provide useful information related to the size, structure, and elevation of different objects, but cannot model the spectral characteristics of different materials. The main objective of this project is to propose efficient approaches for the integration of LiDAR and hyperspectral data.
The objective of the project is to develop a methodological framework for generalized coupled spectral unmixing to simultaneously unmix multisensor and multitemporal spectral images. The framework is applied to time series analysis and resolution enhancement of hyperspectral imagery. The outcome of the project will promote synergy and fusion of spaceborne hyperspectral and multispectral data (e.g., EnMAP and Sentinel-2).
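Spectral unmixing rests on the linear mixing model: each pixel spectrum is approximated as a non-negative, sum-to-one combination of endmember spectra. The following sketch solves this with a simple projected-gradient solver; it is a minimal single-pixel illustration under an assumed endmember matrix, not the generalized coupled framework developed in the project:

```python
import numpy as np

def unmix(pixel, E, n_iter=500):
    """Estimate abundances a >= 0 minimizing ||E a - pixel||^2 by projected
    gradient descent, then normalize so the abundances sum to one.
    E has shape (n_bands, n_endmembers)."""
    lr = 1.0 / np.linalg.norm(E.T @ E, 2)      # step size from spectral norm
    a = np.full(E.shape[1], 1.0 / E.shape[1])  # uniform initial abundances
    for _ in range(n_iter):
        grad = E.T @ (E @ a - pixel)
        a = np.maximum(a - lr * grad, 0.0)     # gradient step + projection
    s = a.sum()
    return a / s if s > 0 else a

# Hypothetical 3-band endmembers; the pixel is a 30/70 mixture of them.
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.2]])
pixel = 0.3 * E[:, 0] + 0.7 * E[:, 1]
abundances = unmix(pixel, E)
```

The coupled formulation in the project extends this to many images at once, sharing endmember information across sensors and acquisition dates.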
Satellite remote sensing enables us to recover, contact-free and from space, large-scale information about the physical properties of our Earth system. Retrieving information from these massive Earth observation data requires efficient computing. To develop faster algorithms, especially for the large-scale problems that arise in Earth observation, it is thus essential to consider parallel computing. To this end, an interdisciplinary approach combining optimal information retrieval with computationally efficient, parallelized solvers for large-scale problems appears to be the optimal solution, and is the focus of this project. Figure courtesy of LRZ.
SiPEO develops explorative algorithms to improve information retrieval from remote sensing data, in particular data from the current and next generations of Earth observation missions. Currently, the team is working in the following main areas: 1) sparse Earth observation; 2) non-local filtering concepts; 3) robust estimation. The improved retrieval of geo-information from EO data can be used to better support cartographic applications, resource management, civil security, disaster management, planning, and decision making.
This project exploits sparsity in remote sensing data, e.g. the SAR signal in the elevation direction and full-waveform LiDAR. The project will identify sparse signals in remote sensing data and provide forward modeling and inversion techniques. Fast parallel sparse reconstruction solvers tailored to our problems will also be developed.
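A canonical sparse-reconstruction solver of the kind such projects build on is the Iterative Shrinkage-Thresholding Algorithm (ISTA), which inverts a linear forward model under an L1 penalty. This is a generic textbook sketch, not the project's tailored parallel solver:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding; A is the forward model, y the measurements."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - (A.T @ (A @ x - y)) / L        # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

# Hypothetical sanity check with an identity forward model: ISTA then
# reduces to soft-thresholding of the measurements.
x_hat = ista(np.eye(5), np.array([0.5, 0.0, -2.0, 0.0, 0.01]), lam=0.1, n_iter=10)
```

In SAR tomography, `A` would be the (complex-valued) steering matrix along the elevation direction and `x` the sparse reflectivity profile; fast variants such as FISTA and parallel implementations target exactly this large-scale setting.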
The research envisioned in this project leads to a new kind of city model for monitoring and visualizing the dynamics of urban infrastructure at a very high level of detail. The change or deformation of different parts of individual buildings will be made accessible to different types of users (geologists, civil engineers, decision makers, etc.) to support city monitoring and management as well as risk assessment.
The objective of the project is to develop Compressive Sensing reconstruction algorithms for terahertz (THz) body scanners in order to improve the image quality of these scanners. Our part of the project focuses in particular on the development of joint 3D reconstruction techniques for FMCW THz radar imaging that combine THz imaging in the x and y directions with FMCW radar in the z direction. By using regularizers and exploiting the sparsity of the signal during reconstruction, we attempt to improve image quality and acquisition speed. Figure courtesy of Sven Augustin.
This project is a follow-up to the project "4D City". It attempts the first reconstruction of objects from 3-D tomographic SAR point clouds, with the vision of building dynamic city models that could potentially be used to monitor and visualize the dynamics of urban infrastructure at a very high level of detail. The basic idea is to reconstruct 3-D building models by modeling each individual façade independently, assembling the per-façade models into the overall 2-D shape of the building footprint, and then representing it in 3-D.
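The per-façade modeling step can be illustrated in its simplest form: given the (x, y) coordinates of points belonging to one façade, estimate the façade's dominant direction, e.g. via PCA on the centered points. This is a toy building block under an assumed noise-free point cluster, not the project's full reconstruction pipeline:

```python
import numpy as np

def facade_line(points):
    """Fit the dominant direction of a façade point cluster (n, 2) via PCA.
    Returns the centroid and a unit direction vector — one ingredient for
    assembling a 2-D building footprint from per-façade models."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)  # right singular vectors = principal axes
    return c, vt[0]

# Hypothetical façade points lying exactly on the line y = 2x.
pts = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
centroid, direction = facade_line(pts)
```

Real tomographic SAR point clouds are noisy and anisotropic, so robust estimators (e.g. RANSAC-style fitting) and regularization of the footprint geometry are needed on top of this basic idea.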
This project makes use of the special configurations of the TanDEM-X Science Phase for precise 3D point localization and coastline detection. A joint feature of the investigated applications is the exploitation of large spatial and temporal baselines, which are available in the Pursuit Monostatic Mode during the Science Phase. In this phase, the relatively new high-resolution Staring Spotlight Mode will also be available for the first time in a single-pass interferometric configuration.