ZENG Dazhi, ZHENG Le, ZENG Wenwen, et al. Instance segmentation with automotive radar detection points based on trajectory prior[J]. Journal of Signal Processing, 2024, 40(1): 185-196. DOI: 10.16798/j.issn.1003-0530.2024.01.012

Instance Segmentation with Automotive Radar Detection Points Based on Trajectory Prior

Instance segmentation on point clouds is a fundamental task in scene perception. Recent advances in automotive radar have made segmentation based on radar detection points feasible. The output of instance segmentation can serve as valuable input to the tracker and thereby assist decision-making and planning. However, existing instance segmentation methods for automotive radar face significant challenges. One of the primary challenges is the sparsity of detection points compared with LiDAR: conventional methods become less effective when multiple adjacent instances are densely distributed or when the detection points of a single instance are widely spaced. Additionally, instances can be mistakenly segmented when obstacles or other traffic participants cause partial occlusion, leaving the detection point information incomplete. These over-segmentation and under-segmentation problems must be resolved for accurate scene perception. To overcome them, we propose an efficient trajectory-prior-based framework, exploiting the temporal continuity of the scene: the trajectory prior carries the number of instances, among other information, over time. Using the temporal relationships supplied by the trajectory prior, the features fuse instance information from before the occlusion. The fused features are fed into a deep learning network, which computes the center-shift distance from each detection point to its instance center; this distance measurement helps overcome problems such as instance splitting and maintains the integrity of instances. Our method fully exploits the temporal relationship between detection points and performs feature fusion based on the matching relationship computed from the detection points of two adjacent frames. The resulting richer fused features compensate for occlusion and improve instance segmentation accuracy.
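The abstract does not give the exact formulation of the center-shift quantity, but a common way to realize the idea — regressing, for each point, its offset to the instance center so that shifted points collapse together before clustering — can be sketched as follows. The function name and the 2-D point layout are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def center_shift_targets(points, instance_ids):
    """Training targets for center-shift regression (illustrative sketch).

    points       : (N, 2) array of detection point positions (x, y)
    instance_ids : (N,) array of ground-truth instance labels
    returns      : (N, 2) array of offsets from each point to its
                   instance centroid
    """
    shifts = np.zeros_like(points)
    for inst in np.unique(instance_ids):
        mask = instance_ids == inst
        centroid = points[mask].mean(axis=0)   # instance center
        shifts[mask] = centroid - points[mask] # offset toward the center
    return shifts
```

At inference, adding the predicted offsets pulls every point toward its instance center, so even widely spaced points of one instance become tightly grouped and a simple clustering step can recover the instances.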
Finally, to validate the method, we conducted comprehensive simulation experiments. The experimental results verify its effectiveness and demonstrate its superiority over existing methods in the literature. Specifically, compared with segmentation algorithms that lack trajectory prior information, our method improves the average coverage and average accuracy by 6.19% and 4.54%, respectively. Moreover, visualizations of typical scenarios show that the algorithm segments better when large vehicles are turning and when traffic participants are partially occluded. In conclusion, this research presents an efficient approach to instance segmentation in point clouds generated by automotive radar sensors. By incorporating trajectory prior information, the method addresses over-segmentation and under-segmentation, leading to more accurate scene perception. The experimental results and visualizations of typical scenarios highlight the effectiveness and potential of the approach for autonomous driving and advanced driver-assistance systems. Future work will delve deeper into utilizing trajectory prior information to enhance feature extraction and scene perception. Furthermore, since fusing two frames improves segmentation performance, the relationship between segmentation performance and the number of fused frames is worth exploring, and this will be part of our future research.
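The abstract reports gains in average coverage and average accuracy but does not define them. A widely used definition of mean coverage for point-cloud instance segmentation — for each ground-truth instance, take the best IoU achieved by any predicted instance, then average — can be sketched as follows; whether the paper uses exactly this definition is an assumption:

```python
import numpy as np

def mean_coverage(gt_ids, pred_ids):
    """Mean coverage (mCov) over point-level instance labels.

    gt_ids, pred_ids : (N,) arrays assigning each detection point to a
                       ground-truth / predicted instance
    returns          : average, over ground-truth instances, of the best
                       point-set IoU with any predicted instance
    """
    covs = []
    for g in np.unique(gt_ids):
        g_mask = gt_ids == g
        best = 0.0
        for p in np.unique(pred_ids):
            p_mask = pred_ids == p
            inter = np.logical_and(g_mask, p_mask).sum()
            union = np.logical_or(g_mask, p_mask).sum()
            best = max(best, inter / union)
        covs.append(best)
    return float(np.mean(covs))
```

Under this definition, over-segmentation (one true instance split across several predictions) directly lowers the score, since no single prediction then covers the ground-truth instance well.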