
Journal of the Korean Society for Internet Information (한국인터넷정보학회 논문지)

Current Result Document:

한글제목(Korean Title) CNN의 깊은 특징과 전이학습을 사용한 보행자 분류
¿µ¹®Á¦¸ñ(English Title) Pedestrian Classification using CNN's Deep Features and Transfer Learning
저자(Author) 정소영 (Soyoung Chung), 정민교 (Min Gyo Chung)
원문수록처(Citation) Vol. 20, No. 4, pp. 91~102 (2019. 08)
한글내용
(Korean Abstract)
In an autonomous driving system, the ability to classify pedestrians from images captured by a camera is essential for pedestrian safety. Previously, pedestrian features were extracted with HOG (Histogram of Oriented Gradients) or SIFT (Scale-Invariant Feature Transform) and then classified with an SVM (Support Vector Machine), but extracting pedestrian features by hand (handcrafted) in this way has many limitations. This paper therefore presents a method that classifies pedestrians reliably and effectively using the deep features of a CNN (Convolutional Neural Network) and transfer learning. Both of the two representative transfer learning techniques, the fixed feature extractor and fine-tuning, are tested; in particular, a new fine-tuning setting (M-Fine: Modified Fine-tuning) is added, which divides the layers into a transferred part and a non-transferred part at three different sizes and adjusts the weights only of the layers in the non-transferred part. Experiments with five CNN models (VGGNet, DenseNet, Inception V3, Xception, MobileNet) on the INRIA Person data set show that a CNN's deep features outperform handcrafted features such as HOG and SIFT, and that Xception achieves the highest accuracy (threshold = 0.5) of 99.61%. MobileNet, which reaches performance similar to Xception while learning 80% fewer parameters, is the most efficient. Among the three transfer learning techniques, fine-tuning performs best; M-Fine is comparable to or slightly below fine-tuning, but better than the fixed feature extractor.
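The two baseline transfer learning schemes named in the abstract can be illustrated with a minimal Keras sketch. This is not the authors' code; the MobileNet backbone, input size, optimizer, and learning rates below are assumptions. Freezing the pretrained convolutional base gives the fixed feature extractor setting, while leaving it trainable gives fine-tuning.

    # Minimal sketch of the fixed feature extractor vs. fine-tuning schemes
    # (illustrative only; backbone, input size, and learning rates are assumed).
    import tensorflow as tf

    def build_pedestrian_classifier(fine_tune: bool) -> tf.keras.Model:
        base = tf.keras.applications.MobileNet(
            weights="imagenet",          # transfer ImageNet features
            include_top=False,           # drop the 1000-class ImageNet head
            input_shape=(224, 224, 3),
            pooling="avg",
        )
        # Fixed feature extractor: freeze the whole base and train only the
        # new binary (pedestrian / non-pedestrian) head.
        # Fine-tuning: keep the base trainable so its weights are also updated.
        base.trainable = fine_tune

        model = tf.keras.Sequential([
            base,
            tf.keras.layers.Dropout(0.5),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # threshold 0.5 at test time
        ])
        # A smaller learning rate is typical when updating pretrained weights.
        lr = 1e-5 if fine_tune else 1e-3
        model.compile(
            optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
            loss="binary_crossentropy",
            metrics=["accuracy"],
        )
        return model

    fixed_extractor_model = build_pedestrian_classifier(fine_tune=False)
    fine_tuned_model = build_pedestrian_classifier(fine_tune=True)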
영문내용
(English Abstract)
In autonomous driving systems, the ability to classify pedestrians in images captured by cameras is very important for pedestrian safety. In the past, after extracting features of pedestrians with HOG (Histogram of Oriented Gradients) or SIFT (Scale-Invariant Feature Transform), people classified them using an SVM (Support Vector Machine). However, extracting pedestrian characteristics in such a handcrafted manner has many limitations. Therefore, this paper proposes a method to classify pedestrians reliably and effectively using a CNN's (Convolutional Neural Network) deep features and transfer learning. We have experimented with both the fixed feature extractor and the fine-tuning methods, which are two representative transfer learning techniques. In particular, in the fine-tuning method, we have added a new scheme, called M-Fine (Modified Fine-tuning), which divides the layers into transferred parts and non-transferred parts in three different sizes, and adjusts weights only for layers belonging to the non-transferred parts. Experiments on the INRIA Person data set with five CNN models (VGGNet, DenseNet, Inception V3, Xception, and MobileNet) showed that CNN's deep features perform better than handcrafted features such as HOG and SIFT, and that the accuracy of Xception (threshold = 0.5) is the highest at 99.61%. MobileNet, which achieved performance similar to Xception while learning 80% fewer parameters, was the best in terms of efficiency. Among the three transfer learning schemes tested above, the performance of the fine-tuning method was the best. The performance of the M-Fine method was comparable to or slightly lower than that of the fine-tuning method, but higher than that of the fixed feature extractor method.
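The M-Fine setting described above can likewise be sketched, assuming a Keras Xception backbone: the pretrained layers are split into a frozen transferred part and a trainable non-transferred part at three different sizes. The 25%/50%/75% split points below are illustrative assumptions only; the record does not give the paper's actual split sizes.

    # Hedged sketch of an M-Fine-style partial freeze (split fractions assumed).
    import tensorflow as tf

    def build_m_fine_model(frozen_fraction: float) -> tf.keras.Model:
        base = tf.keras.applications.Xception(
            weights="imagenet", include_top=False,
            input_shape=(299, 299, 3), pooling="avg",
        )
        split = int(len(base.layers) * frozen_fraction)
        for layer in base.layers[:split]:
            layer.trainable = False   # transferred part: weights stay fixed
        for layer in base.layers[split:]:
            layer.trainable = True    # non-transferred part: weights are adjusted

        model = tf.keras.Sequential([
            base,
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(
            optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
            loss="binary_crossentropy",
            metrics=["accuracy"],
        )
        return model

    # Three different split sizes, mirroring the abstract's description of M-Fine.
    m_fine_models = [build_m_fine_model(f) for f in (0.25, 0.50, 0.75)]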
키워드(Keyword) Pedestrian Classification, Transfer Learning, Deep Features, CNN, INRIA Person data set