Classification of UAVs and Birds Using Sequential Feature Extraction

  • Abstract: With the wide application of drones in various fields, the threats they pose to countries and regions grow by the day, and effective warning and countermeasures are urgently needed; as drone technology advances, warning against and countering drones also becomes harder. Compared with traditional radar, ubiquitous radar adopts a wide-transmit, narrow-receive beam design and long-time accumulation to obtain sufficient processing gain, which makes it better suited to detecting the weak echoes of “low, slow, and small” targets in complex clutter backgrounds. Building on this high detection sensitivity, ubiquitous radar provides accurate target motion characteristics and high-resolution target Doppler characteristics; with these, drone targets can be detected and classified, and other types of countermeasure equipment can then be cued to neutralize them precisely and effectively. Because birds and drones are similar in trajectory and maneuverability, effectively classifying and identifying the two target types is a typical problem faced by ubiquitous radar. Real-time classification requires the class label of the current track point to be output immediately while keeping computational complexity low. Based on ubiquitous radar track Doppler data, this study therefore proposes an optimized classification workflow built on sequential feature extraction, comprising sliding-window feature extraction and sliding-window-based real-time classification; when applied in a radar system, the workflow outputs track-point class labels in real time as the target trajectory extends. The core of the workflow is the sliding-window feature extraction method, which reduces the dimensionality of the raw data used for feature extraction but also increases the probability that the two target types appear similar within a sequential window, making them harder to distinguish; the features extracted from this low-dimensional data must therefore characterize the targets well. Six features, including the correlation coefficient between the velocities of adjacent windows, are designed to describe the degree of velocity change, the stability of velocity change, and the variation of the trajectory within the sequential window. Feature significance analysis shows that, apart from the velocity standard deviation, the feature distributions have good inter-class separation (low overlap). Using field-measured ubiquitous radar data, experiments select the optimal classifier, a weighted k-nearest-neighbor (WKNN) classifier, and the optimal sequential window size, and random sampling experiments simulate the real-time operation of the radar system: the overall classification accuracy reaches 92%, and under the optimal window size the per-track classification accuracy of both target types remains stable. Finally, the classification results are displayed on Amap; the display shows that the drone trajectories and motion patterns in the dataset are diverse, indicating good dataset diversity. The experiments verify that the proposed features are reasonable and effective and that the real-time classification workflow is feasible.
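The sliding-window (sequential) feature extraction step can be pictured with a short sketch. The snippet below is illustrative only: the window size `W`, the stride, and the exact formulas are assumptions that approximate the six features named in the abstract (adjacent-window velocity correlation, velocity standard deviation, and measures of the degree and stability of velocity and trajectory change); they are not the paper's exact definitions.

```python
import numpy as np

def sequential_features(vel, xy, W=16):
    """Slide a window of W track points along one trajectory and return one
    feature vector per window position.

    vel : 1-D array of radial velocities, one value per track point
    xy  : (N, 2) array of planar track-point positions
    """
    feats = []
    for i in range(W, len(vel) - W + 1):
        prev_w = vel[i - W:i]                   # previous window of velocities
        curr_w = vel[i:i + W]                   # current window of velocities
        dxy = np.diff(xy[i:i + W], axis=0)      # step vectors inside the window
        heading = np.arctan2(dxy[:, 1], dxy[:, 0])
        feats.append([
            np.corrcoef(prev_w, curr_w)[0, 1],  # correlation of adjacent-window velocities
            np.std(curr_w),                     # velocity standard deviation
            np.ptp(curr_w),                     # velocity range: degree of velocity change
            np.mean(np.abs(np.diff(curr_w))),   # mean step-to-step velocity change
            np.std(np.diff(curr_w)),            # stability of the velocity change
            np.std(np.diff(heading)),           # variation of the trajectory (heading) change
        ])
    return np.asarray(feats)
```

In a deployed system, each new track point closes one more window, so a new feature vector, and hence a new class label, becomes available as the trajectory extends.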

     

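The window-level feature vectors are then labelled by the classifier reported as performing best, a weighted k-nearest-neighbor (WKNN) classifier. The sketch below assumes inverse-distance voting, k = 5, and z-score-normalized features; these are illustrative choices, not parameters taken from the paper.

```python
import numpy as np

def wknn_predict(X_train, y_train, x, k=5, eps=1e-9):
    """Weighted k-nearest-neighbor vote for a single feature vector x.

    X_train : (M, 6) training feature vectors (assumed z-score normalized)
    y_train : (M,) integer labels, e.g. 0 = bird, 1 = UAV
    """
    d = np.linalg.norm(X_train - x, axis=1)    # Euclidean distance to every training sample
    nn = np.argsort(d)[:k]                     # indices of the k nearest neighbors
    w = 1.0 / (d[nn] + eps)                    # inverse-distance weights
    votes = np.bincount(y_train[nn], weights=w, minlength=2)
    return int(np.argmax(votes))               # class with the largest weighted vote
```

Applied online, the label returned for the newest window is reported as the class of the current track point; a per-track decision could be obtained by aggregating the window labels, for example by majority vote, although the abstract does not specify the aggregation rule.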