
정보과학회논문지 (Journal of KIISE)


Korean Title: 저해상도 이미지 분류를 위한 고해상도 이미지로부터의 Self-Attention 정보 추출 네트워크
English Title: Low-Resolution Image Classification Using Knowledge Distillation From High-Resolution Image Via Self-Attention Map
Author(s): 신성호 (Sungho Shin), 이주순 (Joosoon Lee), 이준석 (Junseok Lee), 최승준 (Seungjun Choi), 이규빈 (Kyoobin Lee)
Citation: Vol. 47, No. 11, pp. 1027-1031 (Nov. 2020)
Korean Abstract (translated):
Existing deep-learning models have been developed using high-quality images, and their performance degrades sharply as image quality decreases. To build a deep-learning model that copes effectively with low-quality images, this study extracts information useful for classification from high-resolution images in the form of an attention map. We then propose a network that uses knowledge distillation to transfer the attention map extracted from high-resolution images to a low-resolution image model; when classifying 16×16 low-resolution CIFAR100 images, it reduced the error rate by 2.94%. This corresponds to 38.43% of the error-rate increase incurred when the image resolution is lowered from 32×32 to 16×16, demonstrating the effectiveness of the proposed network.
English Abstract:
Traditional deep-learning models have been developed using high-quality images, and their performance drops drastically when they are given low-resolution inputs. To develop a deep-learning model that responds effectively to low-resolution images, we extract information from a model that takes high-resolution images as input, in the form of an attention map. Transferring this attention map from the high-resolution model to a low-resolution image model via knowledge distillation reduced the error rate by 2.94% when classifying low-resolution CIFAR images at 16×16 resolution. This corresponds to 38.43% of the error increase incurred when the image resolution is lowered from 32×32 to 16×16, demonstrating the effectiveness of the proposed network.
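The abstract describes distilling an attention map from a high-resolution teacher into a low-resolution student. The paper's exact formulation is not reproduced on this page, so the sketch below shows only a generic activation-based attention-transfer loss in NumPy; the function names, the channel-wise squared-sum attention map, and the L2 normalization are assumptions for illustration, not the authors' published method.

```python
import numpy as np

def attention_map(features):
    # features: (C, H, W) activation tensor from one network layer.
    # Collapse the channel axis by summing squared activations, then
    # L2-normalize the flattened map so teacher and student maps live
    # on a comparable scale regardless of network width.
    amap = (features ** 2).sum(axis=0)
    flat = amap.ravel()
    return flat / (np.linalg.norm(flat) + 1e-12)

def attention_distillation_loss(teacher_feats, student_feats):
    # Squared L2 distance between normalized attention maps.
    # Assumes both feature maps share the same spatial size (e.g. the
    # teacher's map has already been pooled/resized to match).
    t = attention_map(teacher_feats)
    s = attention_map(student_feats)
    return float(((t - s) ** 2).sum())
```

In training, a loss like this would be added to the student's ordinary cross-entropy term, weighted by a hyperparameter, so the low-resolution student is pushed to attend to the same spatial regions the high-resolution teacher does.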
Keywords: low-resolution image, image classification, knowledge distillation, self-attention map