
Deep learning for radar data exploitation of autonomous vehicle

Abstract: Autonomous driving requires a detailed understanding of complex driving scenes. The redundancy and complementarity of the vehicle's sensors provide an accurate and robust comprehension of the environment, thereby increasing the level of performance and safety. This thesis focuses on the automotive RADAR, a low-cost active sensor that measures properties of surrounding objects, including their relative speed, and has the key advantage of not being impacted by adverse weather conditions.

With the rapid progress of deep learning and the availability of public driving datasets, the perception ability of vision-based driving systems (e.g., object detection or trajectory prediction) has considerably improved. The RADAR sensor is seldom used for scene understanding due to its poor angular resolution; the size, noise, and complexity of RADAR raw data; and the lack of available datasets. This thesis proposes an extensive study of RADAR scene understanding, from the construction of an annotated dataset to the design of adapted deep learning architectures.

First, this thesis details approaches to tackle the current lack of data. A simple simulation as well as generative methods for creating annotated data are presented. It also describes the CARRADA dataset, composed of synchronised camera and RADAR data, with a semi-automatic method generating annotations on the RADAR representations.

This thesis then presents a set of deep learning architectures with their associated loss functions for RADAR semantic segmentation. The best-performing proposed architecture outperforms alternative models, derived either from the semantic segmentation of natural images or from RADAR scene understanding, while requiring significantly fewer parameters.
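As an illustration of the kind of loss function commonly paired with segmentation architectures on sparse RADAR annotations, a soft Dice loss can be sketched as below. This is a generic, hypothetical example, not the thesis's exact formulation; the function name and shapes are assumptions.

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss for one class of a range-Doppler segmentation mask.

    probs:  predicted probabilities in [0, 1], shape (H, W)
    target: binary ground-truth mask, shape (H, W)
    Returns 0 for a perfect prediction, approaches 1 for a fully wrong one.
    """
    intersection = np.sum(probs * target)
    union = np.sum(probs) + np.sum(target)
    return 1.0 - (2.0 * intersection + eps) / (union + eps)
```

Dice-style losses are a common choice when the annotated region (e.g., an object's signature in a range-Doppler map) occupies only a small fraction of the grid, since they are less dominated by the background class than plain cross-entropy.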
It also introduces a method to open up research into the fusion of LiDAR and RADAR sensors for scene understanding. Finally, this thesis presents a collaborative contribution, the RADIal dataset, with synchronised High-Definition (HD) RADAR, LiDAR and camera. A deep learning architecture is also proposed to estimate the RADAR signal processing pipeline while simultaneously performing multitask learning for object detection and free driving space segmentation.
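Multitask learning of the kind described above is typically trained with a weighted sum of per-task losses. The sketch below is a minimal, hypothetical illustration of such a combined objective, assuming a regression-style detection loss and a binary cross-entropy free-space loss; the weights and loss choices are assumptions, not the thesis's actual design.

```python
import numpy as np

def multitask_loss(det_pred, det_target, seg_prob, seg_target,
                   w_det=1.0, w_seg=1.0):
    """Weighted sum of a detection loss and a free-space segmentation loss.

    det_pred, det_target: regression targets for detections (e.g., box params)
    seg_prob, seg_target: predicted probabilities and binary free-space mask
    """
    # Detection branch: mean squared error on the regression targets.
    l_det = np.mean((det_pred - det_target) ** 2)
    # Segmentation branch: binary cross-entropy on the free-space mask.
    eps = 1e-7
    p = np.clip(seg_prob, eps, 1.0 - eps)
    l_seg = -np.mean(seg_target * np.log(p) + (1 - seg_target) * np.log(1 - p))
    return w_det * l_det + w_seg * l_seg
```

Sharing a backbone between the two heads and balancing the weights `w_det` and `w_seg` is the usual design trade-off: each task acts as a regulariser for the other, at the cost of tuning the weighting.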
Contributor: ABES STAR
Submitted on : Friday, March 11, 2022 - 5:42:09 PM
Last modification on : Saturday, March 12, 2022 - 3:07:11 AM


Version validated by the jury (STAR)


  • HAL Id: tel-03606384, version 1



Arthur Ouaknine. Deep learning for radar data exploitation of autonomous vehicle. Computer Vision and Pattern Recognition [cs.CV]. Institut Polytechnique de Paris, 2022. English. ⟨NNT : 2022IPPAT007⟩. ⟨tel-03606384⟩


