
Journal of KIISE (정보과학회논문지)


Korean Title: 역제곱 비균일 양자화 기법을 활용한 심층신경망의 에너지 효율성 개선
English Title: Exploiting Inverse Power of Two Non-Uniform Quantization Method to Increase Energy Efficiency in Deep Neural Networks
Author(s): Jun-Yeong Choi (최준영), Joonhyuk Yoo (유준혁)
Citation: Vol. 47, No. 1, pp. 27-35 (Jan. 2020)
Korean Abstract:
The computational complexity of deep neural networks (DNNs) causes excessive computation and energy consumption, and is a major factor that makes it difficult to apply DNNs to embedded devices with limited resources. To mitigate this, this paper proposes an inverse-power-of-two non-uniform quantization method that reduces the precision of DNN weights while inducing more sparsity than existing quantization methods, thereby reducing both computation and energy consumption. Various uniform and non-uniform quantization methods with different mapping policies are implemented on the AlexNet and VGGNet models, the accuracy and energy efficiency of the proposed inverse-power-of-two quantization method are demonstrated through image classification on the CIFAR-10 and ImageNet datasets, and an additional training technique that can further improve them is also presented. Experimental results show that, at a bit-width of 2, the AlexNet and VGGNet models quantized with the proposed inverse-power-of-two non-uniform method lose 2.2% and 2.5% in accuracy, respectively, compared with the full-precision baseline, but reduce energy consumption by about 63.2% and 66.5%, respectively.
English Abstract:
The computational complexity of deep neural networks (DNNs) makes them difficult to deploy on embedded devices with limited resources, because deep learning requires high-performance computing and consumes considerable energy. To mitigate this, this paper proposes an energy-efficient Inverse Power of Two (IPow2) non-uniform quantization technique that induces more sparsity than existing quantization methods while reducing the precision of weights, thereby reducing both the computational complexity and the energy consumption of DNNs. The accuracy and energy efficiency of the proposed IPow2 method are quantitatively validated on image classification with the CIFAR-10 and ImageNet datasets, using quantized AlexNet and VGGNet models implemented under a variety of mapping policies. Experimental results show that, with two-bit quantization of the AlexNet and VGGNet models, the proposed IPow2 method consumes 63.2% and 66.5% less energy, respectively, while incurring only minor accuracy losses of 2.2% and 2.5% compared with the full-precision baseline.
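The abstracts describe mapping DNN weights onto inverse powers of two so that low-magnitude weights collapse to zero, which is what yields the extra sparsity. The Python sketch below is only a minimal illustration of that idea under assumptions that are not taken from the paper: a per-tensor scale equal to the maximum weight magnitude, a codebook of zero plus signed scaled inverse powers of two, and nearest-codeword rounding. The function name ipow2_quantize and the bits parameter are hypothetical, and the paper's actual mapping policies and retraining procedure may differ.

```python
import numpy as np

def ipow2_quantize(weights, bits=2):
    """Hedged sketch of inverse-power-of-two (IPow2) non-uniform quantization.

    Assumption (not from the paper): each weight is mapped to the nearest
    value in a codebook {0, +/- s*2^-1, +/- s*2^-2, ...}, where s is the
    per-tensor maximum magnitude. Mapping small weights to exact zero is
    what induces the extra sparsity described in the abstract.
    """
    s = np.max(np.abs(weights))
    if s == 0:
        return weights.copy()
    # Number of non-zero magnitude levels per sign for the given bit-width.
    n_levels = 2 ** (bits - 1) - 1          # e.g. bits=2 -> one level: s*2^-1
    codebook = np.array([0.0] + [s * 2.0 ** -(k + 1) for k in range(n_levels)])
    signs = np.sign(weights)
    mags = np.abs(weights)
    # Nearest-codeword assignment for each weight magnitude.
    idx = np.argmin(np.abs(mags[..., None] - codebook), axis=-1)
    return signs * codebook[idx]

# Example: quantize a toy weight tensor to 2 bits.
w = np.array([0.8, -0.05, 0.3, -0.6, 0.01])
print(ipow2_quantize(w, bits=2))   # small weights collapse to 0.0
```

With bits=2 the codebook holds a single non-zero magnitude per sign, so every weight below the decision threshold is zeroed out, which illustrates how such a scheme trades a small accuracy loss for sparsity and energy savings.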
Keywords: deep neural network (DNN), inverse power of two non-uniform quantization, energy efficiency