A New Active Calibration Method for Array Gain/Phase Errors

New calibration method for array gain and phase errors with signal sources

  • Abstract: Practical arrays exhibit gain/phase response errors, which often degrade the performance of high-resolution array processing algorithms. This paper presents a new active (source-aided) calibration method for array gain/phase response errors. Based on subspace orthogonality, the errors are modeled explicitly and the array calibration problem is converted into a parameter estimation problem, whose optimal solution is obtained conveniently with the classical Lagrange multiplier method. The method requires only a single calibration source at a known direction, is computationally simple, needs no iteration, and applies to arrays of arbitrary geometry with known sensor positions, which makes it well suited to calibrating a real array before installation. A 16-element circular acoustic array was measured and calibrated in a semi-anechoic room experiment. The data-processing results show that the proposed method accurately estimates the gain/phase error of each sensor, and that the performance of MVDR (Minimum Variance Distortionless Response) beamforming is clearly improved after calibration.
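The estimation step is only summarized above, so the following is a minimal sketch of one common subspace-orthogonality formulation of single-source gain/phase calibration, solved in closed form through a Lagrange multiplier condition. The function name, the unit-norm constraint, and the reference-sensor normalization below are assumptions made for illustration; they are not details taken from the paper.

```python
# Minimal sketch (assumed formulation): estimate a diagonal gain/phase error
# vector g from snapshots of one calibration source at a known direction.
import numpy as np

def estimate_gain_phase_errors(R, a_cal, n_sources=1):
    """R      : (M, M) sample covariance matrix of the calibration-source data.
    a_cal  : (M,)  nominal (error-free) steering vector toward the known source.
    Returns the complex gain/phase error vector g, normalized to sensor 0."""
    M = R.shape[0]
    eigvals, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = eigvecs[:, : M - n_sources]                # noise-subspace basis

    # Subspace orthogonality: the perturbed steering vector diag(a_cal) g is
    # (approximately) orthogonal to the noise subspace, so minimize
    #   g^H Q g,   Q = diag(a_cal)^H En En^H diag(a_cal).
    A = np.diag(a_cal)
    Q = A.conj().T @ (En @ En.conj().T) @ A

    # Assumed unit-norm constraint g^H g = 1 to exclude the trivial solution.
    # The Lagrange stationarity condition Q g = lambda g makes the minimizer
    # the eigenvector of Q belonging to its smallest eigenvalue.
    _, V = np.linalg.eigh(Q)
    g = V[:, 0]

    # Resolve the scale ambiguity: reference sensor 0 gets unit gain, zero phase.
    return g / g[0]
```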


    Abstract: High-resolution algorithms are attractive in array signal processing, but they are rarely applied in practice because their performance degrades under practical array imperfections such as gain and phase errors. To address this problem, a new subspace-based calibration method using signal sources is proposed in this paper. By modeling the gain and phase errors of the array, the calibration problem is transformed into a parameter estimation problem, which is solved easily by the classical Lagrange multiplier method. The proposed method needs only one signal source at a known direction; compared with existing auto-calibration methods, it has much lower computational complexity and requires no iteration. More importantly, it applies to arbitrary arrays with known sensor positions, making it well suited to calibrating a real-world array before deployment. To demonstrate its performance, an experiment was conducted in a semi-anechoic room to measure and calibrate a sixteen-sensor circular acoustic array. The results show that the gain and phase errors are estimated accurately by the proposed method. Moreover, the performance of the Minimum Variance Distortionless Response (MVDR) beamformer is improved markedly after calibration, which further confirms the correctness and validity of the method.
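To make the beamforming comparison concrete, here is a minimal, self-contained sketch of how an estimated gain/phase vector g could be applied in MVDR processing. The MVDR formulas are standard; the circular-array steering model, function names, and the way calibration is applied (replacing nominal steering vectors a(θ) by diag(g) a(θ)) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumed details): MVDR spatial spectrum with nominal versus
# error-compensated steering vectors for a 16-element uniform circular array.
import numpy as np

def steering_uca(theta, M=16, radius_over_lambda=0.5):
    """Nominal far-field steering vector of a uniform circular array for a
    plane wave arriving from azimuth theta (sensors in the horizontal plane)."""
    phi = 2 * np.pi * np.arange(M) / M               # sensor position angles
    tau = radius_over_lambda * np.cos(theta - phi)   # path difference / wavelength
    return np.exp(2j * np.pi * tau)

def mvdr_spectrum(R, A):
    """MVDR spectrum P(theta) = 1 / (a^H R^{-1} a); columns of A are the
    steering vectors on the scan grid."""
    RiA = np.linalg.solve(R, A)                      # R^{-1} A
    return 1.0 / np.real(np.sum(A.conj() * RiA, axis=0))

# Example usage (R: sample covariance of the measured snapshots,
# g: gain/phase vector obtained from the calibration step):
# grid   = np.linspace(-np.pi, np.pi, 721)
# A_nom  = np.column_stack([steering_uca(t) for t in grid])
# P_uncal = mvdr_spectrum(R, A_nom)                  # errors ignored
# P_cal   = mvdr_spectrum(R, np.diag(g) @ A_nom)     # nominal vectors compensated by diag(g)
```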

