Research on FPGA-Based Neural Network Hardware Accelerators: A Review
-
Abstract
With the widespread application of deep learning in fields such as computer vision, natural language processing, and autonomous driving, the complexity and scale of neural network models have grown explosively, placing significant demands on hardware computing capability. Traditional general-purpose computing platforms such as CPUs and GPUs increasingly fall short in energy efficiency, real-time performance, and flexibility, particularly in edge computing and low-power scenarios. Consequently, algorithm optimization and hardware acceleration for neural networks have become prominent research topics. Field-programmable gate arrays (FPGAs), as reconfigurable hardware, offer unique advantages for deep learning acceleration owing to their inherent parallelism, low power consumption, and flexible programmability. This paper systematically reviews FPGA-based neural network hardware acceleration technologies, covering recent progress in computing architecture optimization, hierarchical memory design, and model compression. It analyzes in detail the computational characteristics and hardware acceleration frameworks of mainstream neural network models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), and Transformers, and outlines core FPGA acceleration techniques such as parallel computing architectures with double-buffering strategies, sparse matrix computation, and structured pruning. Finally, it discusses the challenges facing FPGA-based neural network accelerators, including model optimization under resource constraints and the limited co-adaptation between algorithms and hardware, proposes a set of feasible solutions, and identifies directions for future research.
-