Quality-Aware Domain Adaptation for Underwater Image Enhancement Quality Assessment


Abstract: As the demand for high-quality underwater images continues to grow in the field of underwater research, underwater image enhancement (UIE) algorithms have been widely applied. To evaluate the quality of these enhanced underwater images, researchers have proposed several underwater image enhancement quality assessment (UIEQA) algorithms. However, UIEQA algorithms trained on known underwater scenes often fail to generalize effectively to unknown underwater scenes. Additionally, existing UIEQA algorithms typically rely on large amounts of annotated data, which are difficult and resource-intensive to obtain. To address these two issues, this paper proposes a quality-aware domain adaptation-based underwater image enhancement quality assessment (QaDA-UIEQA) algorithm. The proposed method consists of a quality assessment module and a quality-aware domain adaptation module. First, the quality assessment module performs supervised quality assessment training on the source-domain data to ensure the accuracy of the main task. Second, the quality-aware domain adaptation module, guided by textual information, uses a cross-attention (CA) module to extract important quality-related information from the visual features. Domain adaptation techniques are then used to narrow the gap in quality characteristics between the source and target domains, enabling models trained on known underwater scenes to generalize effectively to unknown underwater scenes. Experimental results on the SAUD+ dataset show that the proposed method achieves the best results on four key performance metrics compared with 13 existing methods. In particular, the Spearman rank correlation coefficient (SRCC) improves by 8.5% over the second-best model. Ablation studies further demonstrate that the proposed multimodal approach significantly enhances model performance. The proposed method not only exhibits excellent performance in UIEQA but also surpasses the comparison methods in prediction accuracy and in generalization capability in a group maximum differentiation competition. Therefore, QaDA-UIEQA offers stronger generalization and robustness and can maintain efficient, stable performance in complex real-world applications.
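The two components summarized above can be sketched at a very high level as text-guided cross-attention followed by a domain-alignment loss. The code below is a minimal illustration, not the authors' implementation: it assumes generic feature matrices, and it substitutes a simple linear-kernel maximum mean discrepancy (MMD) for whatever alignment objective the paper actually uses; all function names and shapes are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def text_guided_cross_attention(text_feat, visual_feat):
    """Text features act as queries; visual features supply keys/values,
    so the output retains only the visually encoded information that is
    relevant to the quality-describing text.

    text_feat:   (n_text, d)   visual_feat: (n_vis, d)
    returns:     (n_text, d)
    """
    d = text_feat.shape[-1]
    scores = text_feat @ visual_feat.T / np.sqrt(d)  # (n_text, n_vis)
    attn = softmax(scores, axis=-1)                  # rows sum to 1
    return attn @ visual_feat                        # attended visual features

def mmd_linear(src_feat, tgt_feat):
    """Linear-kernel MMD: a simple stand-in for the domain-adaptation loss
    that narrows the source/target gap in quality characteristics."""
    delta = src_feat.mean(axis=0) - tgt_feat.mean(axis=0)
    return float(delta @ delta)

# Illustrative usage with random features standing in for real embeddings.
rng = np.random.default_rng(0)
text = rng.normal(size=(4, 64))       # quality-describing text embeddings
vis_src = rng.normal(size=(49, 64))   # source-domain visual tokens
vis_tgt = rng.normal(size=(49, 64))   # target-domain visual tokens
q_src = text_guided_cross_attention(text, vis_src)
q_tgt = text_guided_cross_attention(text, vis_tgt)
align_loss = mmd_linear(q_src, q_tgt)  # minimized jointly with the quality loss
```

In training, a loss of this form would be minimized alongside the supervised quality-regression loss on the source domain, pushing the quality-related features of unlabeled target-domain (unknown-scene) images toward the source distribution.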

     
