Forensic Method for JPEG Image Reconstruction

Abstract: Digital images are widely used in various online services and as judicial evidence. At the same time, with popular image-editing software, ordinary users can tamper with image semantics without leaving visible traces, so identifying the originality and authenticity of digital images has become an urgent practical need. Metadata-based image tampering forensics has attracted attention for its high accuracy and low computational cost. However, the emergence of original-image reconstruction techniques, exemplified by tools such as the MagicEXIF metadata editor, renders these methods completely ineffective. To address this problem, this paper proposes a forensic method for JPEG original-image reconstruction that detects whether an image has been subjected to a reconstruction attack. By analyzing the reconstruction process and the differences in pixel statistics between images before and after reconstruction, this paper makes a lightweight improvement to the deep-learning steganalysis model SRNet (Steganalysis Residual Network): its redundant downsampling layers are pruned to reduce the number of parameters, a channel attention mechanism is introduced to strengthen the extraction of key features, and knowledge distillation is applied to further improve accuracy. Furthermore, by analyzing how reconstruction affects different color components, the YCbCr color components are used as the model input to improve detection performance. To evaluate the algorithm, we collected images captured by mobile phones of different brands and models and built a large-scale database of reconstructed images. Experiments show that the proposed model outperforms popular models despite having significantly fewer parameters, achieves a detection accuracy above 98% on 512×512 images, and generalizes well across devices. Moreover, with transfer learning, the proposed method also generalizes well to different versions of the reconstruction software.
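Note: as a minimal, illustrative sketch only (not the authors' implementation; the layer sizes, function names, and hyperparameters below are assumptions), the following PyTorch code shows the three generic building blocks the abstract refers to: a squeeze-and-excitation style channel-attention block, a standard soft-target knowledge-distillation loss, and the RGB-to-YCbCr conversion (ITU-R BT.601) used to form the model input.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (hypothetical sizes)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global average pooling
        self.fc = nn.Sequential(                         # excitation: per-channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                     # reweight feature maps channel-wise

def distillation_loss(student_logits, teacher_logits, temperature: float = 4.0):
    """Standard soft-target knowledge-distillation loss (teacher -> student)."""
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

def rgb_to_ycbcr(rgb: torch.Tensor) -> torch.Tensor:
    """Convert an (N, 3, H, W) RGB batch in [0, 1] to full-range YCbCr (BT.601)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return torch.stack([y, cb, cr], dim=1)

if __name__ == "__main__":
    images = torch.rand(2, 3, 512, 512)                  # two 512x512 RGB images
    feats = torch.rand(2, 64, 128, 128)                  # hypothetical intermediate features
    print(rgb_to_ycbcr(images).shape, ChannelAttention(64)(feats).shape)

In the method described above, an attention block of this kind would sit inside the pruned SRNet backbone, and the distillation loss would transfer knowledge from a larger teacher network; the exact placement and hyperparameters are specific to the paper and are not reproduced here.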

     
