AI4TWINNING

Funded by TUM GNI

Project Leader
Prof. Dr.-Ing. Xiaoxiang Zhu

Project Scientists
Zhaiyu Chen, Dr. Yao Sun

Cooperation Partners
TUM Chair of Computational Modeling and Simulation (Prof. Dr.-Ing. André Borrmann)
TUM Chair of Computer Vision and Artificial Intelligence (Prof. Dr. rer. nat. Daniel Cremers)
TUM Chair of Geoinformatics (Prof. Dr. rer. nat. Thomas H. Kolbe)
TUM Chair of Architectural Informatics (Prof. Dr.-Ing. Frank Petzold)
TUM Photogrammetry and Remote Sensing (Prof. Dr.-Ing. Uwe Stilla)


Runtime
2021 – 2025

In recent years, big Earth observation data on the order of tens of petabytes have become available from complementary sources. Earth observation (EO) satellites, for example, routinely provide large-scale geo-information on cities worldwide from space, but the resulting data are limited in resolution and viewing geometry. Closer-range platforms such as aircraft, on the other hand, offer more flexibility for capturing Earth features on demand.

This sub-project develops new data science and machine learning approaches tailored to the optimal exploitation of the big geospatial data provided by these Earth observation platforms, in order to deliver invaluable information for a better understanding of the built world. In particular, we focus on the 3D reconstruction of the built environment at the level of individual buildings. This research landscape comprises both a large-scale stream, which aims at comprehensive 3D mapping from monocular remote sensing imagery complementary to reconstructions derived from camera streams (Project Cremers), and a local stream, which aims at a more detailed view of selected points of interest in very high resolution and can serve as the basis for thermal mapping (Project Stilla), semantic understanding (Projects Kolbe & Petzold), and BIMs (Project Borrmann).

From the AI methodology perspective, the large-scale stream will focus on the robustness and transferability of the deep learning and machine learning models to be developed, while for the very-high-resolution stream we will particularly research the fusion of multisensory data, as well as hybrid models combining model-based signal processing methods with data-driven machine learning models for improved information retrieval.
With the experience gained in this sub-project, we will lay the foundation for future Earth observation characterized by an ever-improving trade-off between high coverage and simultaneously high spatial and temporal resolution, ultimately enabling AI and Earth observation to provide a multi-scale 3D view of our built environment.