LIU Hongqing,XIE Qizhou,ZHAO Yu,et al. Audio-visual fusion speech separation method based on dilated convolution and Transformer[J]. Journal of Signal Processing, 2024, 40(7): 1208-1217. DOI: 10.16798/j.issn.1003-0530.2024.07.003

Audio-visual Fusion Speech Separation Method Based on Dilated Convolution and Transformer

To improve the performance of speech separation, the visual signal may serve as auxiliary information in addition to the mixed speech signal. This multimodal modeling approach, which integrates visual and audio signals, has been proven to improve speech separation performance and opens new possibilities for speech separation tasks. To better capture the long-term dependencies in visual and audio features and to enhance the network's understanding of contextual information in the input, this study proposes a time-domain audio-visual fusion speech separation model based on one-dimensional dilated convolution and the Transformer. The traditional frequency-domain audio-visual fusion speech separation method is moved to the time domain, avoiding the information loss and phase-reconstruction problems caused by the time-frequency transformation. The proposed network architecture consists of four modules: a visual feature extraction network, which extracts lip embedding features from video frames; an audio encoder, which converts the mixed speech into a feature representation; a multimodal separation network, which consists of an audio subnetwork, a video subnetwork, and a Transformer network and uses visual and audio features for speech separation; and an audio decoder, which restores the separated features to clean speech. This study uses the LRS2 dataset to generate a dataset containing the mixed speech of two speakers. Experimental results show that the proposed network attains improvements of 14.0 dB in scale-invariant signal-to-noise ratio and 14.3 dB in signal-to-distortion ratio, significantly outperforming both the audio-only separation model and general audio-visual fusion models.
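The one-dimensional dilated convolution named in the abstract is what lets a time-domain separator cover long temporal contexts cheaply: each layer skips `dilation - 1` samples between kernel taps, so stacking layers with exponentially growing dilations expands the receptive field exponentially while the parameter count grows only linearly. The following stdlib-only sketch (not the paper's implementation; all names here are illustrative) shows a valid-mode 1-D dilated convolution and the receptive-field formula for a stack of such layers:

```python
def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1-D convolution (cross-correlation form) with dilation.

    Each output sample combines len(kernel) input samples spaced
    `dilation` apart, so one layer spans (k - 1) * dilation + 1 samples.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1  # temporal footprint of one output sample
    return [
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ]


def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of dilated conv layers (stride 1)."""
    return 1 + (kernel_size - 1) * sum(dilations)


# Dilation 2 with a length-3 kernel reads every other sample:
out = dilated_conv1d([1, 2, 3, 4, 5, 6], [1, 1, 1], dilation=2)
# out[0] = x[0] + x[2] + x[4] = 9;  out[1] = x[1] + x[3] + x[5] = 12

# Exponentially growing dilations (as in TCN-style separators):
rf = receptive_field(3, [1, 2, 4, 8])  # 1 + 2 * 15 = 31 samples
```

With kernel size 3 and dilations 1, 2, 4, 8 the four-layer stack already sees 31 consecutive samples, whereas four ordinary (dilation-1) layers would see only 9; this is the mechanism that gives the time-domain encoder its long-range context before the Transformer models the remaining global dependencies.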