Audio-Visual Fusion Speech Separation Method Based on Dilated Convolution and Transformer
Abstract: To improve speech separation performance, the visual signal can serve as auxiliary information in addition to the mixed speech signal. This multimodal modeling approach, which integrates visual and audio signals, has been shown to effectively improve speech separation performance and opens new possibilities for speech separation tasks. To better capture long-term dependencies in visual and audio features and to strengthen the network's understanding of contextual information in the input, this paper proposes a time-domain audio-visual fusion speech separation model based on one-dimensional dilated convolutions and a Transformer. The conventional frequency-domain audio-visual fusion speech separation approach is moved to the time domain, avoiding the information loss and phase reconstruction problems caused by the time-frequency transform. The proposed network architecture consists of four modules: a visual feature extraction network, which extracts lip embedding features from video frames; an audio encoder, which converts the mixed speech into a feature representation; a multimodal separation network, composed mainly of an audio subnetwork, a video subnetwork, and a Transformer network, which uses the visual and audio features to separate the speech; and an audio decoder, which restores the separated features to clean speech. Experiments are conducted on a two-speaker mixture dataset generated from the LRS2 dataset. The results show that the proposed network reaches 14.0 dB in scale-invariant signal-to-noise ratio improvement (SI-SNRi) and 14.3 dB in signal-to-distortion ratio improvement (SDRi), clearly outperforming both an audio-only separation model and a general audio-visual fusion separation model.
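To make the four-module layout described above more concrete, the following is a minimal PyTorch sketch of the pipeline (audio encoder, multimodal separation network with dilated convolutions and a Transformer, audio decoder). It is not the authors' implementation: every class name, channel width, kernel size, and layer count is an illustrative assumption, and the visual feature extraction network is stood in for by pre-computed lip embeddings.

```python
# Minimal PyTorch sketch of the four-module pipeline described in the abstract.
# All names, channel widths, and hyper-parameters are illustrative assumptions;
# the visual feature extraction network is replaced by pre-computed lip embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioEncoder(nn.Module):
    """Turns the time-domain mixture waveform into a 2-D feature map (no STFT)."""
    def __init__(self, n_filters=256, kernel_size=16, stride=8):
        super().__init__()
        self.conv = nn.Conv1d(1, n_filters, kernel_size, stride=stride, bias=False)

    def forward(self, mixture):                         # mixture: (batch, samples)
        return F.relu(self.conv(mixture.unsqueeze(1)))  # (batch, n_filters, frames)


class SeparationNetwork(nn.Module):
    """Audio subnetwork (1-D dilated convolutions) + video subnetwork + Transformer,
    predicting a mask for the target speaker's features."""
    def __init__(self, n_filters=256, visual_dim=512, n_blocks=4):
        super().__init__()
        # Audio subnetwork: dilation grows exponentially to widen the receptive field.
        self.audio_net = nn.Sequential(*[
            nn.Conv1d(n_filters, n_filters, 3, padding=2 ** b, dilation=2 ** b)
            for b in range(n_blocks)])
        # Video subnetwork: projects lip embeddings onto the audio feature dimension.
        self.video_net = nn.Conv1d(visual_dim, n_filters, 1)
        layer = nn.TransformerEncoderLayer(d_model=n_filters, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.mask = nn.Conv1d(n_filters, n_filters, 1)

    def forward(self, audio_feat, lip_emb):
        a = self.audio_net(audio_feat)             # (batch, n_filters, frames)
        v = self.video_net(lip_emb)                # (batch, n_filters, video_frames)
        v = F.interpolate(v, size=a.shape[-1])     # match the audio frame rate
        fused = self.transformer((a + v).transpose(1, 2)).transpose(1, 2)
        return torch.sigmoid(self.mask(fused))     # mask in [0, 1]


class AudioDecoder(nn.Module):
    """Maps masked encoder features back to a time-domain waveform."""
    def __init__(self, n_filters=256, kernel_size=16, stride=8):
        super().__init__()
        self.deconv = nn.ConvTranspose1d(n_filters, 1, kernel_size, stride=stride, bias=False)

    def forward(self, masked_feat):
        return self.deconv(masked_feat).squeeze(1)  # (batch, samples)


if __name__ == "__main__":
    encoder, separator, decoder = AudioEncoder(), SeparationNetwork(), AudioDecoder()
    mixture = torch.randn(1, 8000)      # 0.5 s of a 16 kHz two-speaker mixture
    lip_emb = torch.randn(1, 512, 12)   # ~25 fps lip embeddings for the target speaker
    feat = encoder(mixture)
    estimate = decoder(feat * separator(feat, lip_emb))
    print(estimate.shape)               # torch.Size([1, 8000])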
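The SI-SNRi and SDRi figures quoted in the abstract measure how much the separated estimate improves over the unprocessed mixture. As a reference, the following short NumPy sketch shows the standard SI-SNR / SI-SNRi computation; the function names are illustrative and not taken from the paper, and published evaluation scripts may differ in minor details.

```python
# Sketch of the standard SI-SNR / SI-SNRi computation behind the reported metrics.
import numpy as np


def si_snr(estimate, target, eps=1e-8):
    """Scale-invariant signal-to-noise ratio (dB) of `estimate` w.r.t. `target`."""
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Project the estimate onto the target so the score ignores overall scaling.
    s_target = (np.dot(estimate, target) / (np.dot(target, target) + eps)) * target
    e_noise = estimate - s_target
    return 10 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))


def si_snri(estimate, mixture, target):
    """SI-SNR improvement: gain of the separated estimate over the raw mixture."""
    return si_snr(estimate, target) - si_snr(mixture, target)
```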