KIPS Transactions on Software and Data Engineering

Current Result Document:

Korean Title: 향상된 음향 신호 기반의 음향 이벤트 분류 (Sound-Event Classification Based on Enhanced Sound Signals)
English Title: Enhanced Sound Signal Based Sound-Event Classification
Authors: Yongju Choi (최용주), Jonguk Lee (이종욱), Daihee Park (박대희), Yongwha Chung (정용화)
Citation: Vol. 8, No. 5, pp. 193-204 (May 2019)
Korean Abstract (translated)
The explosion of data brought about by advances in sensor technology and computing performance has become the foundation for analyzing conditions at industrial sites, and attempts to detect and classify the various events that occur in the field based on such data have been increasing recently. In particular, sound sensors are being installed in many fields thanks to the major advantage that they can collect sound signals capturing field information without distortion at a relatively low cost. However, if the noise that arises during sound acquisition cannot be controlled effectively, events at industrial sites cannot be classified reliably, and if an event that goes unclassified is an abnormal situation, the resulting damage can be enormous. In this study, to guarantee a system that is robust even under noisy conditions, we propose a system that first generates a sound signal in which the influence of noise has been mitigated using a deep learning algorithm and then classifies the corresponding sound event. In particular, SEGAN, which applies a VAE technique on top of a GAN, is used to generate a noise-removed signal directly from the analog sound signal itself, and an end-to-end sound-event classification system is designed in which the enhanced sound signal is fed into a CNN as input, without any data conversion process, so that sound events can be identified directly. The performance of the proposed system was verified experimentally using sound data acquired from industrial sites, and stable classification performance of 99.29% (railway industry) and 97.80% (livestock industry) was confirmed.
English Abstract
The explosion of data driven by improvements in sensor technology and computing performance has become the basis for analyzing conditions in industrial fields, and attempts to detect events from such data have been increasing recently. In particular, sound signals collected from sensors are used as important information for classifying events in various application fields, owing to the advantage that field information can be collected efficiently at a relatively low cost. However, the performance of sound-event classification in the field cannot be guaranteed if noise cannot be removed; that is, to implement a practically applicable system, robust performance must be guaranteed even under various noise conditions. In this study, we propose a system that classifies sound events after generating an enhanced sound signal based on a deep learning algorithm. Specifically, to remove noise from the sound signal itself, enhanced sound data that is robust against noise is generated using SEGAN, which applies a VAE technique to a GAN. An end-to-end sound-event classification system is then designed that classifies sound events using the enhanced sound signal as the input to a CNN, without a data conversion process. The performance of the proposed method was verified experimentally using sound data obtained from industrial fields, and F1-scores of 99.29% (railway industry) and 97.80% (livestock industry) were confirmed.
Keywords: Noise Robustness, Sound Signal Generation, End-to-End Architecture, Deep Learning
File Attachment: PDF download
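
For readers who want a concrete picture of the pipeline described in the abstract, the sketch below illustrates the two-stage flow in PyTorch: a denoising front end standing in for the SEGAN generator, followed by a 1-D CNN that classifies the enhanced waveform end-to-end without any feature conversion. This is a minimal illustration under assumed settings, not the architecture reported in the paper: the layer sizes, the 16 kHz sample rate, the four-class setup, and the names `WaveformEventClassifier` and `classify` are invented for the example, and the SEGAN generator is replaced by an identity placeholder.

```python
# Minimal sketch of the two-stage flow from the abstract (assumptions, not the
# authors' reported architecture):
#   (1) a SEGAN-style model denoises the raw waveform,
#   (2) a 1-D CNN classifies the enhanced waveform end-to-end, with no
#       spectrogram or other hand-crafted feature conversion in between.
import torch
import torch.nn as nn

class WaveformEventClassifier(nn.Module):
    """1-D CNN that maps a raw (enhanced) waveform to sound-event logits."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=32, stride=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=16, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # length-independent pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, 1, num_samples), already passed through the
        # denoising front end.
        x = self.features(waveform).squeeze(-1)
        return self.classifier(x)

def classify(noisy: torch.Tensor, enhancer: nn.Module, clf: nn.Module) -> torch.Tensor:
    """Enhance first, then classify: the two-stage flow from the abstract."""
    with torch.no_grad():
        enhanced = enhancer(noisy)  # stand-in for a trained SEGAN-like generator
    return clf(enhanced).argmax(dim=-1)

if __name__ == "__main__":
    enhancer = nn.Identity()                  # placeholder for the SEGAN generator
    clf = WaveformEventClassifier(num_classes=4)
    noisy = torch.randn(2, 1, 16000)          # two 1-second clips at an assumed 16 kHz
    print(classify(noisy, enhancer, clf))     # predicted class index per clip
```

In practice the identity placeholder would be swapped for a trained SEGAN-style generator, and the adaptive pooling layer keeps the classifier independent of clip length so recordings of different durations can share the same model.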