YANG Yuting, ZHAO Lingjun, ZHAO Lulu, et al. Integrated registration method with enhanced position awareness for optical and SAR images[J]. Journal of Signal Processing, 2024, 40(3): 557-568. DOI: 10.16798/j.issn.1003-0530.2024.03.014.

Integrated Registration Method with Enhanced Position Awareness for Optical and SAR Images

Image registration is the basis for fusing information from optical and SAR images. Most existing registration methods rely on feature-point detection and matching; however, because they generalize poorly across different scene types, they are prone to excessive mismatches and too few effective correspondences, which can cause registration to fail. This study therefore investigates an integrated registration method with enhanced position awareness for optical and SAR images. The method uses a deep neural network to directly regress the geometric transformation between the images, achieving end-to-end, high-precision registration without feature-point detection. First, a feature-extraction module that integrates coordinate attention is used in the backbone network to extract position-sensitive, fine-grained features from the input image pairs. Second, the multiscale features output by the backbone are fused, combining the positional information of low-level features with the semantic information of high-level features. Finally, a loss function that combines position deviation with image similarity is used to optimize the registration result. Experiments on a publicly available high-resolution optical and SAR dataset (OS-Dataset) showed that, compared with four typical algorithms (OS-SIFT, RIFT2, DHN, and DLKFM), the proposed method is robust across scene types such as urban, farmland, river, repetitive-texture, and weak-texture areas, and outperforms them in both visual quality and quantitative precision. The percentage of average corner errors below 3 pixels was more than 25% higher than that of DLKFM, the most accurate of the four algorithms, and the registration speed was comparable to that of DHN, the fastest of the four. The proposed method thus achieves high-precision, high-efficiency registration of optical and SAR images.
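The abstract mentions a feature-extraction module that integrates coordinate attention. The paper's implementation is not reproduced here; the following is only a minimal PyTorch sketch of a standard coordinate-attention block (Hou et al., CVPR 2021) of the kind such a module typically builds on. The class name, channel counts, and reduction ratio are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class CoordinateAttention(nn.Module):
    """Coordinate attention: factorizes global pooling into two 1-D poolings
    along height and width, so the attention map keeps positional information."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        hidden = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, hidden, kernel_size=1)
        self.bn = nn.BatchNorm2d(hidden)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(hidden, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Direction-aware pooling: keep one spatial axis, squeeze the other.
        x_h = self.pool_h(x)                        # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)    # (B, C, W, 1)
        y = torch.cat([x_h, x_w], dim=2)            # (B, C, H+W, 1)
        y = self.act(self.bn(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # (B, C, 1, W)
        return x * a_h * a_w    # position-sensitive reweighting of the features


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    print(CoordinateAttention(64)(feat).shape)   # torch.Size([2, 64, 32, 32])
```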
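The multiscale fusion step combines the positional detail of shallow features with the semantics of deep features. As a rough illustration only, here is a generic FPN-style top-down fusion in PyTorch; the channel sizes and the single 3×3 smoothing convolution are assumptions and do not describe the paper's specific fusion design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleFusion(nn.Module):
    """FPN-style top-down fusion: deep (semantic) maps are upsampled and added
    to shallow (position-rich) maps through 1x1 lateral convolutions."""

    def __init__(self, in_channels=(64, 128, 256), out_channels: int = 128):
        super().__init__()
        self.laterals = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels])
        self.smooth = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, feats):
        # feats: backbone maps ordered shallow (high-res) -> deep (low-res)
        laterals = [lat(f) for lat, f in zip(self.laterals, feats)]
        fused = laterals[-1]
        for lat in reversed(laterals[:-1]):          # top-down pathway
            fused = lat + F.interpolate(fused, size=lat.shape[-2:],
                                        mode="bilinear", align_corners=False)
        return self.smooth(fused)                    # finest-resolution fused map


if __name__ == "__main__":
    feats = [torch.randn(1, 64, 64, 64),
             torch.randn(1, 128, 32, 32),
             torch.randn(1, 256, 16, 16)]
    print(MultiScaleFusion()(feats).shape)   # torch.Size([1, 128, 64, 64])
```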
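The loss combines position deviation with image similarity, but the abstract does not give the exact formulation. Below is one plausible sketch, assuming a deep-homography-style setup in which the network regresses four corner displacements and a differentiable warp is applied upstream: an L1 corner term plus a (1 − NCC) similarity term. The weighting factors alpha and beta are hypothetical.

```python
import torch
import torch.nn.functional as F


def registration_loss(pred_corners, gt_corners, warped_sar, optical,
                      alpha: float = 1.0, beta: float = 0.1) -> torch.Tensor:
    """Combined loss: corner position deviation + image (dis)similarity.

    pred_corners / gt_corners : (B, 4, 2) corner displacements defining the
        homography, as in deep-homography-style regression.
    warped_sar : (B, 1, H, W) SAR patch already warped into the optical frame
        by the predicted transform (a differentiable warp is assumed upstream).
    optical    : (B, 1, H, W) reference optical patch.
    """
    # 1) Position term: average deviation of the four predicted corners.
    pos_loss = F.l1_loss(pred_corners, gt_corners)

    # 2) Similarity term: 1 - normalized cross-correlation between the images.
    def ncc(a, b, eps=1e-6):
        a = a - a.mean(dim=(2, 3), keepdim=True)
        b = b - b.mean(dim=(2, 3), keepdim=True)
        num = (a * b).sum(dim=(2, 3))
        den = torch.sqrt((a * a).sum(dim=(2, 3)) * (b * b).sum(dim=(2, 3)) + eps)
        return (num / den).mean()

    sim_loss = 1.0 - ncc(warped_sar, optical)
    return alpha * pos_loss + beta * sim_loss
```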
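The evaluation reports the percentage of test pairs whose average corner error falls below 3 pixels. The exact evaluation protocol is not stated in the abstract; the NumPy sketch below shows how this metric is commonly computed for homography-based registration.

```python
import numpy as np


def average_corner_error(h_est: np.ndarray, h_gt: np.ndarray,
                         width: int, height: int) -> float:
    """Average corner error (ACE): warp the four image corners with the
    estimated and ground-truth 3x3 homographies and average the Euclidean
    distance (in pixels) between the two sets of warped corners."""
    corners = np.array([[0, 0], [width - 1, 0],
                        [width - 1, height - 1], [0, height - 1]], dtype=np.float64)

    def warp(h, pts):
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
        out = (h @ pts_h.T).T
        return out[:, :2] / out[:, 2:3]                    # back to Cartesian

    return float(np.linalg.norm(warp(h_est, corners) - warp(h_gt, corners), axis=1).mean())


def success_rate(errors, threshold: float = 3.0) -> float:
    """Fraction of image pairs whose ACE is below the threshold (3 px here)."""
    errors = np.asarray(errors)
    return float((errors < threshold).mean())
```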
