An LM-BP Algorithm Based on the Normalized Least Mean Square Error Criterion

Abstract: Traditional neural networks usually adopt the least mean square (LMS) or recursive least squares (RLS) error as the convergence criterion, but in some applications, such as adaptive equalization, the normalized least mean square (NLMS) criterion gives better network performance. Building on the NLMS criterion, this paper proposes a convergence algorithm for neural networks trained with the Levenberg-Marquardt (LM) method: the network's error function is normalized and the LM algorithm is used for training, so that the network converges quickly. Theoretical analysis and simulation results show that, compared with the NLMS criterion trained by steepest descent and the LMS criterion trained by the LM algorithm, the proposed algorithm converges faster and reaches a smaller normalized mean square error. Applied to a neural-network watermarking system, it realizes blind extraction of the watermark, is more robust to attacks such as additive noise, low-pass filtering, and re-quantization, and improves performance by 4% on average.
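For concreteness, here is a minimal sketch of the two ingredients the abstract combines, written with standard textbook definitions rather than the paper's exact notation (the symbols x_k, d_k, y_k, \delta and \mu below are illustrative assumptions): the NLMS-style criterion normalizes each squared output error by the input power, and the LM step is applied to the resulting normalized residuals.

\[
e_k = d_k - y_k(w), \qquad
J_{\mathrm{NLMS}}(w) = \frac{1}{2}\sum_k \frac{e_k^{2}}{\lVert x_k\rVert^{2} + \delta} = \frac{1}{2}\sum_k r_k^{2}, \qquad
r_k = \frac{e_k}{\sqrt{\lVert x_k\rVert^{2} + \delta}}
\]

\[
\Delta w = -\bigl(J_r^{\mathsf{T}} J_r + \mu I\bigr)^{-1} J_r^{\mathsf{T}} r, \qquad
(J_r)_{kj} = \frac{\partial r_k}{\partial w_j}
\]

Here x_k is the network input, d_k the desired output, y_k(w) the network output, \delta a small constant that prevents division by zero, and \mu the LM damping factor. Compared with the plain LMS cost (1/2)\sum_k e_k^2, the normalization makes the criterion insensitive to the input signal power, which is why NLMS is attractive in adaptive equalization; the damping factor \mu lets the step range from a Gauss-Newton step (small \mu) to a small gradient-descent step (large \mu).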

     
