Dense Image Captioning Based on Deep Convolution and Global Visual Features

  • Abstract: To address inaccurate localization of regions of interest (ROI) and coarse-grained region descriptions in dense image captioning, this paper proposes a dense image captioning algorithm based on deep convolution and global features. The algorithm combines a residual network with a parallel LSTM (Long Short-Term Memory) network in a joint model to mitigate overlapping region localization and the loss of detail caused by coarse-grained descriptions. First, a deep residual network together with the RPN (Region Proposal Network) layer of Faster R-CNN (Faster Region-based Convolutional Neural Network) is used to obtain more accurate region bounding boxes, avoiding overlapping region annotations. Then, global, local, and context feature information are fed into parallel LSTM networks, and a fusion operator integrates the three outputs to produce the final description sentence. Comparison with two mainstream algorithms on public datasets shows that the proposed model has certain advantages.
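The parallel-branch design described above can be illustrated with a minimal NumPy sketch: three feature streams (global, local, context) each pass through their own LSTM step, and the branch outputs are fused. This is an assumption-laden toy, not the paper's implementation: the function names (`lstm_step`, `parallel_lstm_fusion`), the feature and hidden dimensions, and the choice of an element-wise sum as the fusion operator are all hypothetical, since the abstract does not specify the operator's exact form, and a real captioning decoder would additionally consume word embeddings over many time steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # One LSTM time step. Gate order in the stacked weights:
    # input, forget, output, candidate. W: (4H, D), U: (4H, H), b: (4H,).
    z = W @ x + U @ h + b
    H = h.shape[0]
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    o = sigmoid(z[2 * H:3 * H])  # output gate
    g = np.tanh(z[3 * H:])       # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def make_params(D, H):
    # Small random parameters for one branch (illustrative only).
    return (0.1 * rng.standard_normal((4 * H, D)),
            0.1 * rng.standard_normal((4 * H, H)),
            np.zeros(4 * H))

def parallel_lstm_fusion(global_feat, local_feat, context_feat, H=8):
    # Each feature stream goes through its own LSTM branch; the three
    # hidden states are then fused. An unweighted element-wise sum is
    # assumed here as the fusion operator.
    fused = np.zeros(H)
    for feat in (global_feat, local_feat, context_feat):
        W, U, b = make_params(feat.shape[0], H)
        h, c = np.zeros(H), np.zeros(H)
        h, c = lstm_step(feat, h, c, W, U, b)
        fused += h
    return fused

out = parallel_lstm_fusion(rng.standard_normal(16),  # global feature
                           rng.standard_normal(16),  # local (ROI) feature
                           rng.standard_normal(16))  # context feature
print(out.shape)  # (8,)
```

Summing keeps the fused vector the same size as each branch output; concatenation followed by a linear projection would be an equally plausible reading of "fusion operator".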

     
