Title: LIDAR & camera reference system for generating ground truth data for lane detection
Overview
- Date: Starts 9 June 2023, 11:00; ends 9 June 2023, 12:00
- Location:
- Language: Swedish and English
Examiner: Lars Hammarstrand
Abstract:
The development of autonomous vehicles, also called self-driving cars, has the potential to revolutionize transportation. Advanced sensors and algorithms enable these
vehicles to navigate and operate autonomously. To achieve high levels of safety and
reliability, autonomous vehicles require massive amounts of well-labeled data. As
a result, data annotation is crucial. As part of data annotation, various elements,
such as objects, pedestrians, and road markings, are manually labeled and tagged.
Annotating data is time-consuming, costly, and prone to human error. Hence, it is
desirable to automate and improve the annotation processes.
This thesis proposes three main ideas: a pipeline for automatically annotating 3D
lanes using LIDAR scans and 2D lane labels, a model for lane detection, and, lastly,
a method that combines past and future inferences to improve lane detection. The annotation
pipeline uses LIDAR scans from before and after the current frame
to strengthen the ground truth. Our model incorporates two machine learning
frameworks, SuperFusion and M2-3DLaneNet, to generate 3D lane predictions. To
represent the 3D lanes, a grid representation of four classes (dashed, solid, other, and
empty) was used. Combining the 3D predictions over time improved performance
with respect to the ground truth. The evaluations demonstrate that a model utilizing more
LIDAR scans per image frame performs better, particularly at shorter distances
(0-30 m). The root mean square error (RMSE) of the classified lanes in the horizontal
direction relative to the heading is approximately 5.5 cm. Future improvements include longer
training times and a more complex grid representation to better capture
the 3D lanes.
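As a rough illustration of the error metric mentioned in the abstract, the sketch below computes the RMSE between predicted and ground-truth lateral lane offsets sampled at matching longitudinal distances. The function name, the four-class grid encoding constants, and the sample numbers are illustrative assumptions, not taken from the thesis itself.

```python
import math

# Assumed encoding of the four-class lane grid described in the abstract;
# the actual label values used in the thesis may differ.
EMPTY, DASHED, SOLID, OTHER = 0, 1, 2, 3

def lateral_rmse(pred_y, gt_y):
    """RMSE (in metres) between predicted and ground-truth lateral lane
    offsets, sampled at the same longitudinal distances along the lane."""
    if len(pred_y) != len(gt_y) or not pred_y:
        raise ValueError("inputs must be non-empty and equal-length")
    squared_errors = [(p - g) ** 2 for p, g in zip(pred_y, gt_y)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Illustrative numbers only: lateral offsets (m) of one lane line
# sampled at a few longitudinal distances in the 0-30 m range.
pred = [1.80, 1.84, 1.79, 1.91]
gt = [1.85, 1.80, 1.83, 1.86]
print(f"lateral RMSE: {lateral_rmse(pred, gt):.3f} m")
```

A per-distance-bin variant of the same computation would reproduce the abstract's observation that accuracy is evaluated separately for shorter ranges such as 0-30 m.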
Welcome!
Kevin, Ismail and Lars