
Finger Vein Descriptor Extraction Based on CNN and VLAD

Abstract: To improve the robustness of finger vein descriptors while reducing the number of network parameters, we modify VGGFace-Net and introduce the Vector of Locally Aggregated Descriptors (VLAD), yielding a finger vein descriptor extraction network with only 0.3M parameters. The VLAD encoding clusters and reassembles the local descriptors, which makes the descriptors more robust to changes in finger posture. Since public finger vein training databases are usually small, we train the network with triplets under a hard negative mining strategy. Furthermore, because the triplet loss does not constrain the intra-class variance of sample-pair distances, we propose a pair-center-constrained loss: positive and negative sample pairs are treated as two classes and driven closer to their respective class centers, which increases intra-class compactness. Finger vein verification results on three public databases (FV-USM, SDUMLA, and MMCBNU) show that, when the extracted descriptors are matched by Euclidean distance, the proposed method outperforms existing methods and is more robust to random image translation.
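The VLAD encoding step described in the abstract can be realized as a differentiable pooling layer in the style of NetVLAD, which soft-assigns each local CNN descriptor to a set of cluster centers and accumulates the residuals. Below is a minimal PyTorch sketch of such a layer; the cluster count (K = 8) and the 64-channel feature map are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VLADPooling(nn.Module):
    """Differentiable VLAD pooling (NetVLAD-style): soft-assigns each
    local CNN descriptor to K cluster centers and sums the residuals."""
    def __init__(self, num_clusters=8, dim=64):
        super().__init__()
        self.num_clusters = num_clusters
        # 1x1 conv produces soft-assignment logits per spatial location
        self.assign = nn.Conv2d(dim, num_clusters, kernel_size=1)
        self.centroids = nn.Parameter(torch.randn(num_clusters, dim))

    def forward(self, x):                              # x: (B, D, H, W)
        B, D, H, W = x.shape
        soft = F.softmax(self.assign(x), dim=1)        # (B, K, H, W)
        soft = soft.view(B, self.num_clusters, -1)     # (B, K, N)
        feats = x.view(B, D, -1)                       # (B, D, N)
        # residual between every local descriptor and every centroid
        res = feats.unsqueeze(1) - self.centroids.view(1, self.num_clusters, D, 1)
        vlad = (soft.unsqueeze(2) * res).sum(-1)       # (B, K, D)
        vlad = F.normalize(vlad, p=2, dim=2)           # intra-normalization
        return F.normalize(vlad.flatten(1), p=2, dim=1)  # fixed-length descriptor
```

Flattening the K x D residual matrix and L2-normalizing it gives a fixed-length descriptor suitable for the Euclidean-distance matching used in the experiments.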

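The triplet training with hard negative mining can likewise be sketched as batch-hard mining over a pairwise distance matrix. The margin value and the within-batch mining below are assumptions for illustration; the paper's exact mining schedule is not reproduced here.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Batch-hard triplet loss: for each anchor take the farthest
    positive and the closest (hardest) negative within the batch."""
    dist = torch.cdist(embeddings, embeddings)            # (B, B) Euclidean
    same = labels.unsqueeze(0) == labels.unsqueeze(1)     # (B, B) same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=dist.device)
    # hardest positive: largest distance among same-class pairs (self excluded)
    pos = dist.masked_fill(~same | eye, float('-inf')).max(dim=1).values
    # hardest negative: smallest distance among different-class pairs
    neg = dist.masked_fill(same, float('inf')).min(dim=1).values
    return F.relu(pos - neg + margin).mean()
```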
     
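The exact formulation of the pair-center-constrained loss is not given in the abstract; the sketch below is one plausible reading, in which the distances of positive and negative pairs are each pulled toward their class center, here approximated by the batch mean (an assumption of this sketch), so that the variance of the pair distances shrinks.

```python
import torch

def pair_center_loss(embeddings, labels):
    """Pull positive-pair and negative-pair distances toward their
    class centers (batch means here), shrinking intra-class variance.
    Assumes each identity appears at least twice in the batch."""
    dist = torch.cdist(embeddings, embeddings)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=dist.device)
    pos_d = dist[same & ~eye]    # distances between same-class samples
    neg_d = dist[~same]          # distances between different-class samples
    # squared deviation from each class center; detaching the mean treats
    # it as a fixed target within the batch (an assumption of this sketch)
    return ((pos_d - pos_d.mean().detach()) ** 2).mean() + \
           ((neg_d - neg_d.mean().detach()) ** 2).mean()
```

A combined objective could then be formed as `loss = batch_hard_triplet_loss(...) + lam * pair_center_loss(...)`, where the weight `lam` is a hypothetical hyperparameter, not one reported in the paper.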
