Yadong Mao and Zhuqi Xiao, Electrical Engineering
Titel: Decentralized Training of 3D Lane Detection with Automatic Labeling Using HD Maps
Examinator: Christopher Zach
Image-based 3D lane detection is one of the critical foundations of many Advanced Driver Assistance Systems (ADAS) features. It has recently been extended to end-to-end deep learning models, which cast 3D lane detection as a one-stage object detection problem. Training such a model requires a vast amount of labeled 3D lane data collected by cars driving worldwide to deliver the target performance. However, data collection and transfer are not straightforward due to emerging data-sharing policies. Federated Learning (FL) enables training AI models across multiple decentralized edge devices while ensuring data privacy, but the lack of skilled human annotators at the edge makes it challenging to obtain high-quality annotations.
In this report, we introduce an automatic labeling pipeline that uses a pre-recorded HD map as the primary source to automatically label the data collected on edge devices. As a reference, a semi-automatic approach is applied to create a ground-truth 3D lane dataset by combining manually annotated 2D lane images with depth maps from aggregated LiDAR point clouds. The results show that our auto-labeling pipeline can generate high-precision 3D lane marking annotations, with parts accurate at ranges beyond 100 meters, making it possible to train a 3D lane detection model in a fully automatic way.
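The semi-automatic reference labeling combines 2D lane annotations with LiDAR-derived depth. A minimal sketch of the underlying back-projection step, assuming a standard pinhole camera model with intrinsic matrix `K` (the function name and array layout are illustrative, not taken from the report):

```python
import numpy as np

def lift_lane_to_3d(uv, depth, K):
    """Back-project annotated 2D lane pixels to 3D camera coordinates.

    uv:    (N, 2) pixel coordinates of manually annotated lane points
    depth: (H, W) depth map, e.g. rendered from aggregated LiDAR point clouds
    K:     (3, 3) camera intrinsic matrix
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = uv[:, 0], uv[:, 1]
    # Sample the metric depth at each annotated pixel.
    z = depth[v.astype(int), u.astype(int)]
    # Invert the pinhole projection: x = (u - cx) * z / fx, etc.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # (N, 3) points in camera frame
```

In practice the sampled depths would need interpolation and outlier filtering where the LiDAR coverage is sparse; this sketch only shows the geometric core.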
In addition to training the model on the semi-automatically and fully-automatically generated datasets in a centralized manner, we also simulate decentralized training with two clients and one server. The decentralized training experiments investigate how training a 3D lane detection model in a decentralized way affects performance.
In this report, we use the Flower framework to implement the decentralized experiments. The clients' edge models are trained on datasets generated by the semi-automatic and fully-automatic methods, and the server's global model aggregates the edge models' parameters according to the FedAvg strategy. We found that, for the same dataset, the average precision and F-score (max) of the model obtained by federated learning are close to those of the model obtained by centralized learning. Although the model trained on the fully-automatic dataset performs slightly worse than the model trained on the semi-automatic data, we believe its performance will improve further as the dataset expands. This means we can train a model with excellent performance through federated learning while keeping the data local to the edge devices.
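The server-side aggregation described above follows FedAvg, where the global parameters are the sample-size-weighted average of the client parameters. A minimal sketch of that averaging step (written here in plain NumPy for clarity; in the actual experiments Flower's built-in strategy performs this):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: weighted average of per-client model parameters.

    client_weights: one list of np.ndarray layer tensors per client
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    aggregated = []
    # zip(*...) groups the same layer across all clients.
    for layers in zip(*client_weights):
        aggregated.append(
            sum(w * (n / total) for w, n in zip(layers, client_sizes))
        )
    return aggregated
```

With two clients, as in the simulated setup, a client holding three times as much data contributes three times the weight to each aggregated layer.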
Yadong, Zhuqi and Christopher
Join the seminar by Zoom