Abstract:
To further improve the performance of deep-learning-based speech bandwidth extension, this paper presents an encoder-decoder (codec) neural network structure. The encoder extracts deep features from the narrowband input, and the decoder reconstructs the wideband speech. Between the encoder and decoder, a locality sensitive hashing self-attention layer enhances the model's ability to select the most relevant deep features. Temporal convolutional networks are used in both the encoder and the decoder, which effectively improves the model's ability to learn the contextual dependencies of speech time-series data. To steer training in a more accurate direction, a time-frequency perceptual loss function is proposed, which helps the model obtain the optimal mapping from narrowband speech to wideband speech in the time, frequency, and perceptual domains. Subjective and objective experimental results show that the proposed method outperforms both traditional methods and recent deep neural network methods for speech bandwidth extension.
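The abstract describes the loss as combining time-domain, frequency-domain, and perceptual terms but does not give its exact form. The following is a minimal PyTorch-style sketch of such a combined loss, assuming an L1 time-domain term, an STFT-magnitude frequency term, and a log-magnitude spectral distance standing in for the perceptual term; the function name, weights, and STFT settings are illustrative assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F


def tf_perceptual_loss(wb_pred, wb_target, n_fft=512, hop=128,
                       w_time=1.0, w_freq=1.0, w_perc=0.5):
    """Sketch of a time-frequency perceptual loss (hypothetical form).

    wb_pred, wb_target: (batch, samples) wideband waveforms.
    """
    # Time-domain term: sample-wise L1 between predicted and target wideband speech.
    l_time = F.l1_loss(wb_pred, wb_target)

    # Frequency-domain term: L1 distance between STFT magnitudes.
    window = torch.hann_window(n_fft, device=wb_pred.device)
    spec_pred = torch.stft(wb_pred, n_fft, hop_length=hop, window=window,
                           return_complex=True).abs()
    spec_target = torch.stft(wb_target, n_fft, hop_length=hop, window=window,
                             return_complex=True).abs()
    l_freq = F.l1_loss(spec_pred, spec_target)

    # Perceptual stand-in: log-magnitude spectral distance (assumption; the
    # paper's actual perceptual measure is not specified in the abstract).
    eps = 1e-7
    l_perc = F.l1_loss(torch.log(spec_pred + eps), torch.log(spec_target + eps))

    # Weighted sum over the time, frequency, and perceptual terms.
    return w_time * l_time + w_freq * l_freq + w_perc * l_perc
```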