Abstract:
As an important branch of image fusion technology, infrared and visible image fusion is widely used in military, industrial, and civilian applications. It integrates the complementary information of the two modalities into a single image with richer information and higher quality, highlighting target information while preserving the texture and detail of the scene. In this paper, a new infrared and visible image fusion method is proposed. A structured sparse constraint is imposed on the robust sparse representation model and combined with a consistency constraint on local region similarity, which overcomes the local blurring and loss of texture detail found in some existing methods and improves fusion accuracy. A structured sparse representation model with consistency constraints is first constructed and solved, and then applied to infrared and visible image fusion. The source images are decomposed into background information and saliency information, and fusion rules are designed for each component separately. Finally, reconstruction is carried out using the dictionary to obtain the final fused image. Experimental results demonstrate that the proposed fusion algorithm consistently outperforms existing state-of-the-art methods in both visual and quantitative evaluations.
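The sketch below illustrates only the overall data flow described in the abstract (decompose each source image into background and saliency layers, fuse each layer with its own rule, then recombine). It is not the paper's method: the structured sparse representation with consistency constraints is replaced by a simple low-pass/residual split, and the averaging and max-absolute fusion rules are placeholder assumptions.

```python
# Hypothetical sketch of the decompose -> fuse -> reconstruct pipeline.
# The decomposition and fusion rules here are illustrative stand-ins,
# not the structured sparse representation model proposed in the paper.
import numpy as np
from scipy.ndimage import uniform_filter


def decompose(image, window=31):
    """Split an image into a smooth background layer and a saliency residual."""
    background = uniform_filter(image.astype(np.float64), size=window)
    saliency = image - background
    return background, saliency


def fuse(ir, vis):
    """Fuse grayscale infrared and visible images of the same size (floats in [0, 1])."""
    ir_bg, ir_sal = decompose(ir)
    vis_bg, vis_sal = decompose(vis)

    # Background rule (assumed): average the two background layers.
    fused_bg = 0.5 * (ir_bg + vis_bg)

    # Saliency rule (assumed): keep the response with the larger magnitude, so
    # bright infrared targets and strong visible edges are both retained.
    fused_sal = np.where(np.abs(ir_sal) >= np.abs(vis_sal), ir_sal, vis_sal)

    # Reconstruction: recombine the fused layers into the final image.
    return np.clip(fused_bg + fused_sal, 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir = rng.random((256, 256))
    vis = rng.random((256, 256))
    print(fuse(ir, vis).shape)  # (256, 256)
```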