Many applications, such as the interpretation of road scenes for autonomous driving, require solving several tasks based on digital images. The information to be extracted typically reflects different aspects of the same objects in the physical world. Knowledge of the correlations between these tasks can be exploited to solve the individual tasks more efficiently and with higher accuracy.
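One common way to exploit such correlations is multi-task learning with a shared representation: a single encoder feeds several task-specific heads, so features learned for one task benefit the others. The following is a minimal, purely illustrative numpy sketch; the layer sizes and the task names (a segmentation head and a depth head) are assumptions, not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Shared encoder: one hidden layer whose features feed every task head.
W_shared = rng.standard_normal((16, 8)) * 0.1
# Task-specific heads, e.g. semantic-segmentation logits and a scalar
# depth regression (names and sizes are purely illustrative).
W_seg = rng.standard_normal((8, 4)) * 0.1
W_depth = rng.standard_normal((8, 1)) * 0.1

def forward(x):
    h = relu(x @ W_shared)          # representation shared across tasks
    return h @ W_seg, h @ W_depth   # each head reads the same features

x = rng.standard_normal((2, 16))    # a mini-batch of 2 feature vectors
seg_out, depth_out = forward(x)
print(seg_out.shape, depth_out.shape)  # → (2, 4) (2, 1)
```

During training, the losses of all heads would be summed (possibly with weights), so gradients from every task shape the shared encoder.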
Hybrid modelling, the combination of machine learning with physically based modelling, aims to explore and develop ways of gaining scientific insight into the observed phenomena despite the black-box character of the models involved, and thereby to harness the potential of temporal deep learning approaches for modelling ecosystem processes.
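A common pattern in hybrid modelling is to let a mechanistic model carry the known physics and add a data-driven correction for what it misses. The sketch below is a toy illustration only, assuming a first-order decay equation for a carbon pool and a hand-picked linear "residual" standing in for a trained neural network; none of these numbers come from the project.

```python
import numpy as np

# Toy physical model: exponential decay of a carbon pool C,
# dC/dt = -k * C, discretised with an explicit Euler step.
def physics_step(c, k=0.1, dt=1.0):
    return c - k * c * dt

# Learned residual: a tiny linear correction standing in for a
# neural network trained on observations (weights are made up here).
w, b = 0.02, 0.001
def residual(c):
    return w * c + b

def hybrid_step(c):
    # The data-driven term corrects what the physics term misses.
    return physics_step(c) + residual(c)

c = 1.0
for _ in range(5):
    c = hybrid_step(c)
print(round(c, 4))  # → 0.6633
```

Because the physics term dominates, the combined model stays interpretable in its main dynamics while the learned part absorbs structured model error.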
The creation of depth maps is essential for numerous applications, such as autonomous driving or augmented reality. Typically, these maps are generated from stereo image pairs or with the help of active sensors (such as LiDAR or RGB-D cameras). Inspired by human monocular depth perception, this project is dedicated to estimating depth maps from single images using artificial neural networks.
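Single-image depth networks are often trained with a scale-invariant logarithmic loss (Eigen et al., 2014), which penalises inconsistent relative depth more than a global scale offset, since absolute scale is ambiguous from one image. The numpy sketch below shows that loss on toy values; it is one common choice from the literature, not necessarily the loss used in this project.

```python
import numpy as np

def scale_invariant_loss(pred, target, lam=0.5):
    """Scale-invariant log loss: mean squared log-difference minus a
    term that discounts a constant (global-scale) offset."""
    d = np.log(pred) - np.log(target)
    return np.mean(d ** 2) - lam * np.mean(d) ** 2

target = np.array([1.0, 2.0, 4.0])

# A perfect prediction yields zero loss.
print(scale_invariant_loss(target, target))  # → 0.0

# A prediction off by a constant scale factor is penalised less than one
# with the same pointwise log error but an inconsistent scale.
off_by_scale = 2.0 * target
inconsistent = target * np.array([2.0, 1.0, 0.5])
print(scale_invariant_loss(off_by_scale, target)
      < scale_invariant_loss(inconsistent, target))  # → True
```

The `lam` parameter interpolates between an ordinary log MSE (`lam=0`) and full scale invariance (`lam=1`).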