Disparity-Guided Deformable Convolution for Light Field Image Super-Resolution
Abstract
Light field image super-resolution aims to enhance the resolution of light field images by exploiting complementary information across views, restoring fine details and improving overall image quality. Light field images are mainly captured by microlens-based cameras (such as Lytro and RayTrix cameras) and by camera arrays. For microlens cameras, the maximum disparity between the images recorded from different viewpoints is typically less than one pixel, whereas for camera arrays it can exceed one pixel. Most existing light field super-resolution methods are tailored to microlens cameras; when applied to large-disparity images (e.g., those captured by camera arrays), they often suffer significant performance degradation because they fail to fully exploit the complementary information among views. Inspired by light field disparity estimation techniques and deformable convolutional networks, this paper presents a disparity-guided deformable convolution method for light field image super-resolution that captures complementary view-domain information even under large disparities. The proposed approach first estimates the disparity of each sub-view image of the light field, then generates deformable convolution offsets from the disparity map to align inter-view features and aggregate complementary information, and finally performs feature fusion and super-resolution reconstruction through a multi-level distillation mechanism. The method is validated on five widely used public light field datasets; experimental results show that it achieves state-of-the-art super-resolution performance and is robust to large disparities.
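The core idea of the pipeline above, converting a disparity map into per-pixel sampling offsets that realign a neighbouring view onto the centre view, can be sketched as follows. This is a minimal NumPy illustration under assumed conventions (function names, sign convention, and nearest-neighbour sampling are all hypothetical simplifications; the actual method uses these offsets inside a deformable convolution, which samples bilinearly on learned features):

```python
import numpy as np

def disparity_to_offsets(disparity, du, dv):
    """Hypothetical helper: per-pixel sampling offsets for a sub-view.

    For a sub-view at angular distance (du, dv) from the centre view,
    a scene point with disparity d appears shifted by roughly
    (du * d, dv * d) pixels, so aligning that view back onto the
    centre view means sampling at those offsets.
    """
    return du * disparity, dv * disparity

def warp_view(view, disparity, du, dv):
    """Align `view` to the centre view by nearest-neighbour sampling
    at the disparity-derived offsets (a stand-in for the bilinear
    sampling a deformable convolution would perform on features)."""
    h, w = view.shape
    off_y, off_x = disparity_to_offsets(disparity, du, dv)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(ys + off_y).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + off_x).astype(int), 0, w - 1)
    return view[src_y, src_x]

# Toy check: a neighbouring view whose content is shifted by 2 px
# (uniform disparity 2, angular offset (0, 1)) realigns with the
# centre view after warping.
centre = np.zeros((6, 6)); centre[2, 2] = 1.0
neigh = np.zeros((6, 6)); neigh[2, 4] = 1.0  # same point, shifted +2 in x
aligned = warp_view(neigh, np.full((6, 6), 2.0), du=0, dv=1)
```

In the full method these offsets parameterise a deformable convolution rather than a hard warp, so misestimated disparities can still be compensated by the learned kernel weights.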