Abstract:
Although neural networks have achieved great success in many pattern recognition tasks, their multi-layer non-linear structure makes it difficult to understand them intuitively and to use them efficiently and accurately. This problem is particularly prominent in the deep neural networks that are now in wide use. Visualization, being concise and intuitive, has become an important means of understanding how complex models work, and the visualization of neural networks has therefore become an active research topic in deep learning. This paper focuses on how a fully-connected neural network partitions the original feature space, and analyzes the role of the fully-connected network in classification from three perspectives: feature transformation, partitioning, and coding. We analyze and visualize the formation and partitioning of cells, the smallest classification units in the network. The activation coding method proposed in this paper allows us to understand, to a certain extent, the partitioning of high-dimensional spaces that cannot be visualized directly, and serves as a tool for defining and discussing two phenomena, "compression" and "self-regularity". By comparing the behavior of different network structures on the same training data and of the same network structure on different training data, we reveal the relationship among self-regularity, the number of cells, and the network's learning ability.
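To make the activation-coding idea concrete, the following is a minimal sketch, not the paper's implementation: it assumes a small, randomly initialized fully-connected ReLU network in NumPy, takes the binary on/off pattern of the hidden units as an input's activation code, and counts the distinct codes over a 2-D grid as a proxy for the number of cells the network carves the input region into. The layer sizes, grid, and counting procedure are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-16-16 fully-connected ReLU network with random weights.
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 16)), rng.normal(size=16)

def activation_code(x):
    """Return the concatenated binary ReLU activation pattern (the 'code') of x."""
    h1 = W1 @ x + b1
    a1 = np.maximum(h1, 0.0)
    h2 = W2 @ a1 + b2
    return tuple((h1 > 0).astype(int)) + tuple((h2 > 0).astype(int))

# Sample a grid over the 2-D input space; inputs sharing one code lie in the
# same cell, so the number of distinct codes estimates the number of cells.
xs = np.linspace(-3.0, 3.0, 200)
codes = {activation_code(np.array([x, y])) for x in xs for y in xs}
print(f"distinct activation codes (cells) on the grid: {len(codes)}")
```

In this view, comparing the code counts produced by different architectures, or by the same architecture trained on different data, is one way to relate cell number to the notions of compression and self-regularity discussed in the paper.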