
Current Result Document:

Korean Title: Deep Learning Accelerator with MAC Optimization for Binary Neural Network
English Title: Deep Learning Accelerator with MAC Optimization for Binary Neural Network
Author(s): Quang Hieu Vo, LokWon Kim, Choong Seon Hong
Citation: Vol. 49, No. 1, pp. 957-959 (June 2022)
Korean Abstract: (not provided)
English Abstract:
Binarized Neural Networks (BNNs) with binary precision have been one of the promising approaches to reducing the complexity of Deep Neural Networks (DNNs) when implemented on hardware devices. In particular, to exploit the binary characteristic, many well-known techniques have been proposed to reduce hardware resources and power consumption. However, for recent neural network models that are deeper and larger, with more layers and channels, architectures based on the current popular solutions are not sufficiently optimized, especially for the multiply-accumulate (MAC) operation, an essential part of a typical neural network layer. In this paper, a MAC optimization method is proposed to make BNN implementations more efficient on FPGA devices. The implemented architecture with the proposed solution achieves up to a 2.62-fold reduction compared to the conventional approach.
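As background for the MAC discussion in the abstract, the sketch below shows the well-known XNOR-popcount substitution that binary precision enables: when weights and activations are restricted to {-1, +1} and packed as bits, a dot product reduces to an XNOR followed by a population count. This is an illustrative example of the conventional technique the abstract alludes to, not the optimization proposed in the paper; the function name and bit packing convention are assumptions for the sketch.

```python
# Illustrative sketch only (not the paper's method): the standard
# XNOR-popcount trick used by BNN accelerators to replace a
# multiply-accumulate (MAC). Bits encode values as 1 -> +1, 0 -> -1,
# so for vectors of length n:
#   dot = 2 * popcount(~(a XOR w)) - n
def binary_mac(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two {-1, +1} vectors packed into n-bit integers."""
    xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # 1 wherever signs match
    matches = bin(xnor).count("1")              # popcount of matching positions
    return 2 * matches - n

# Example: a = [+1, -1, +1, +1], w = [+1, +1, -1, +1] -> dot product = 0
print(binary_mac(0b1011, 0b1101, 4))  # prints 0
```

On FPGA hardware the same idea maps to LUT-based XNOR gates and a popcount adder tree in place of multipliers, which is why MAC structure dominates the resource cost that the paper targets.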
Keyword(s):