Event-Based Deblurring via Pixel-Wise Image Blurry Degree
Graphical Abstract
Abstract
For image deblurring, current end-to-end deep learning methods typically apply shared convolution kernels across all spatial locations of an image: the kernel does not adapt to position, so every region is processed identically regardless of its level of blurriness. In complex scenes, such shared kernels may therefore fail to handle non-uniform blur effectively. To address this, this paper proposes an approach that leverages pixel-wise blur degrees to enhance end-to-end image deblurring. Specifically, a network named DegreeNet is designed and trained to estimate a blur degree map from the input image and the event data captured during the exposure time. Through Degree-based Feature Modulation (DFM), the blur degree map then adaptively modulates the features of DeblurNet, an end-to-end convolutional neural network for restoring blurred images, using dynamic convolution kernels tailored to regions with different blur levels. This strategy enables spatially adaptive convolution and thus effective removal of non-uniform blur. Extensive experiments were conducted on synthetic and real-world event datasets, using public methods as DeblurNet baselines. The results show that the proposed method consistently improves these existing methods on both synthetic and real data, demonstrating strong generalization capability.
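The core idea of degree-based modulation can be sketched as a per-pixel blend of different kernel responses, weighted by the estimated blur degree map. The sketch below is illustrative only: the kernels, the two-branch blending rule, and the function names are assumptions for exposition, not the paper's actual DFM implementation.

```python
import numpy as np

def conv2d(x, k):
    """Single-channel 2-D convolution with 'same' zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def degree_modulated_conv(feat, degree, k_strong, k_light):
    """Hypothetical degree-based modulation: each output pixel blends the
    responses of a 'strong' and a 'light' kernel according to the local
    blur degree in [0, 1], so convolution adapts spatially."""
    assert feat.shape == degree.shape
    return degree * conv2d(feat, k_strong) + (1.0 - degree) * conv2d(feat, k_light)

# Toy example: the right half of the feature map is heavily blurred.
feat = np.random.rand(8, 8)
degree = np.zeros((8, 8))
degree[:, 4:] = 1.0                              # blur degree map
k_light = np.zeros((3, 3)); k_light[1, 1] = 1.0  # identity kernel (sharp regions)
k_strong = np.full((3, 3), 1.0 / 9.0)            # averaging kernel (blurred regions)
out = degree_modulated_conv(feat, degree, k_strong, k_light)
```

Here the sharp left half passes through unchanged while the blurred right half receives the stronger kernel; a learned version would predict the kernels (and the degree map, via DegreeNet) from data rather than fixing them by hand.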