The Quantization Kernel Least Mean P-norm Algorithm Using Transfer Learning
Abstract: In an α-stable distribution noise environment, the kernel least mean P-norm (KLMP) algorithm performs significantly better than the kernel least mean square (KLMS) algorithm, but the computational complexity and storage requirement of KLMP grow linearly with the number of iterations, which hinders practical application. To address this problem, transfer learning theory is applied to decompose the overall filter, obtained from sample instances, into a sum of subfilters with locally compact support, with the training of each subfilter driven by a different input; the result is the nearest-instance-centroid-estimation kernel least mean P-norm (NICE-KLMP) algorithm. To further reduce the storage requirement, online vector quantization is applied to this algorithm, yielding the nearest-instance-centroid-estimation quantization kernel least mean P-norm (NICE-QKLMP) algorithm. Simulation results for Mackey-Glass time-series prediction under α-stable distribution noise show that the complexity of the NICE-KLMP and NICE-QKLMP algorithms is significantly lower than that of the KLMP algorithm, and that their robustness to impulsive noise is significantly stronger than that of the NICE-KLMS algorithm and comparable to that of the KLMP algorithm.
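The abstract describes the two ingredients of NICE-QKLMP only at a high level. The following minimal Python sketch is based on the standard formulations of these techniques in the kernel adaptive filtering literature, not on the paper's own implementation: the KLMP stochastic-gradient update with coefficient η·|e|^(p−1)·sign(e) (robust to impulsive noise for 1 < p < 2), nearest-instance-centroid routing of inputs to locally supported subfilters, and online vector quantization that merges an update into an existing dictionary center instead of growing the dictionary. All parameter names and thresholds (eta, p, d_c, eps_q, sigma) are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian kernel: kappa(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
    return np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(y)) ** 2
                  / (2 * sigma ** 2))

class NiceQklmpSketch:
    """Illustrative NICE-QKLMP filter (a sketch, not the paper's code).

    Each region holds an instance centroid and a locally supported
    subfilter (dictionary centers + coefficients). An input is routed
    to the nearest centroid (a new region is spawned if it is farther
    than d_c); within a region, the KLMP update eta * |e|^(p-1) * sign(e)
    is applied, and online vector quantization merges the update into
    an existing center whenever the input lies within eps_q of it.
    """

    def __init__(self, eta=0.1, p=1.2, d_c=1.0, eps_q=0.1, sigma=1.0):
        self.eta, self.p = eta, p          # step size; p-norm order, 1 < p < 2
        self.d_c, self.eps_q = d_c, eps_q  # centroid radius; quantization radius
        self.sigma = sigma
        self.regions = []                  # [{"centroid", "n", "centers", "coeffs"}]

    def step(self, u, d):
        """One online iteration: returns the prediction for input u."""
        u = np.asarray(u, dtype=float)
        # route to the nearest instance centroid, or spawn a new region
        if self.regions:
            dists = [np.linalg.norm(u - r["centroid"]) for r in self.regions]
            j = int(np.argmin(dists))
        if not self.regions or dists[j] > self.d_c:
            self.regions.append({"centroid": u.copy(), "n": 0,
                                 "centers": [], "coeffs": []})
            j = len(self.regions) - 1
        r = self.regions[j]
        r["n"] += 1
        r["centroid"] += (u - r["centroid"]) / r["n"]  # running-mean centroid
        # predict with this region's subfilter only (local compact support)
        y = sum(a * gaussian_kernel(c, u, self.sigma)
                for c, a in zip(r["centers"], r["coeffs"]))
        e = d - y
        # KLMP stochastic-gradient coefficient; bounded influence of outliers
        a_new = self.eta * np.abs(e) ** (self.p - 1) * np.sign(e)
        if r["centers"]:
            dc = [np.linalg.norm(u - c) for c in r["centers"]]
            k = int(np.argmin(dc))
            if dc[k] <= self.eps_q:
                r["coeffs"][k] += a_new  # quantize: merge, dictionary stays fixed
                return y
        r["centers"].append(u.copy())
        r["coeffs"].append(a_new)
        return y

# toy usage: one-step prediction of a noisy sine wave from 3 past samples
rng = np.random.default_rng(0)
s = np.sin(0.3 * np.arange(400)) + 0.05 * rng.standard_normal(400)
f = NiceQklmpSketch(eta=0.5, p=1.5, d_c=1.5, eps_q=0.2, sigma=0.7)
errs = [s[t] - f.step(s[t - 3:t], s[t]) for t in range(3, 400)]
print("regions:", len(f.regions), "mean |e| (last 100):", np.mean(np.abs(errs[-100:])))
```

In this sketch each region predicts with its own local dictionary only, which is what bounds the per-iteration cost by the size of the active subfilter rather than by the total number of iterations, and the quantization step caps dictionary growth within each region.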