A Self-Learning Sparse DenseNet Method for Image Classification

Abstract: Channel pruning is one of the main methods for deep model compression. DenseNet is a deep convolutional neural network widely used in image classification. In a DenseNet, each layer receives the output feature maps of all preceding convolutional layers as input, yet not every later layer actually needs all of the earlier features, so the network contains considerable redundancy. To address this shortcoming, this paper proposes a self-learning method for pruning redundant channels in DenseNet, yielding a sparse densely connected convolutional neural network. First, a measure of how much each input feature map of a convolutional layer contributes to its output feature maps is proposed; input feature maps with small contributions are treated as redundant. Second, a training procedure is introduced in which the network prunes redundant channels in stages through self-learning, producing a sparse DenseNet with fewer parameters and lower storage and computation costs. Finally, to demonstrate the effectiveness of the method, experiments are conducted on the CIFAR-10/100 image classification datasets; the results show that model redundancy is reduced without sacrificing accuracy.
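As a rough illustration of the channel-contribution idea described above, the sketch below scores each input feature map of a convolutional layer by the L1 norm of the kernel weights that consume it and masks out the lowest-scoring channels. The L1-norm proxy, the keep ratio, and the layer sizes are assumptions made for illustration only; they are not the paper's exact contribution measure or its staged self-learning training procedure.

```python
# Minimal sketch of input-channel contribution scoring and masking (assumptions,
# not the paper's exact criterion).
import torch
import torch.nn as nn


def input_channel_scores(conv: nn.Conv2d) -> torch.Tensor:
    """Proxy for each input feature map's contribution to the layer's output:
    the L1 norm of the kernel weights that read from that channel (assumption)."""
    # conv.weight has shape [out_channels, in_channels, kH, kW]
    return conv.weight.detach().abs().sum(dim=(0, 2, 3))


def prune_mask(conv: nn.Conv2d, keep_ratio: float = 0.7) -> torch.Tensor:
    """Keep the highest-scoring input channels and mask the rest as redundant."""
    scores = input_channel_scores(conv)
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = torch.topk(scores, k).values.min()
    return (scores >= threshold).float()


if __name__ == "__main__":
    # Toy example: one layer in a dense block receiving 64 concatenated
    # feature maps and producing 32 new ones (sizes are illustrative).
    conv = nn.Conv2d(in_channels=64, out_channels=32, kernel_size=3, padding=1)
    mask = prune_mask(conv, keep_ratio=0.7)
    x = torch.randn(1, 64, 32, 32)
    # Zero the redundant input feature maps before the convolution; a real
    # implementation would remove them and rebuild the layer to save compute.
    y = conv(x * mask.view(1, -1, 1, 1))
    print(f"kept {int(mask.sum())}/64 input channels, output shape {tuple(y.shape)}")
```

In an actual pruning pipeline, such a score would be recomputed and the mask tightened over several training stages, with fine-tuning in between, rather than applied once as in this toy example.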
