
Journal of KIISE (정보과학회논문지)


Korean Title: 지식의 증류기법(Knowledge Distillation)을 이용한 한국어 구구조 구문 분석 모델의 압축
English Title: Compression of Korean Phrase Structure Parsing Model using Knowledge Distillation
Authors: 황현선 (Hyunsun Hwang), 이창기 (Changki Lee)
Citation: Vol. 45, No. 5, pp. 451-456 (May 2018)
Korean Abstract:
A sequence-to-sequence model is an end-to-end model that converts an input sequence into an output sequence of a different length. Because it relies on techniques such as the attention mechanism and input-feeding to reach high accuracy, decoding is slow, which makes the model hard to deploy in a real service. To make a trained neural network practical to serve, this paper applies sequence-level knowledge distillation, a knowledge distillation technique for natural language processing that compresses a model effectively to improve its speed, to Korean phrase structure parsing, compressing the model while minimizing the loss in accuracy. Experiments show that when the hidden layer size is reduced from 500 to 50, the distilled model improves F1 by 0.56% over the baseline model while decoding 60.71 times faster.
English Abstract:
A sequence-to-sequence model is an end-to-end model that transforms an input sequence into an output sequence of a different length. However, because it uses techniques such as the attention mechanism and input-feeding to achieve high performance, it is slow and therefore difficult to apply to an actual service. In this paper, we apply sequence-level knowledge distillation for natural language processing, an effective technique for compressing a model, to Korean phrase structure parsing. Experimental results show that when the size of the hidden layer is decreased from 500 to 50, F1 improves by 0.56% and decoding is 60.71 times faster than the baseline model.
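The recipe the abstract describes is sequence-level knowledge distillation (Kim and Rush, 2016): decode the training inputs with the large teacher model, then train the small student on the teacher's 1-best output sequences instead of the gold targets. Below is a minimal PyTorch sketch of that recipe under toy assumptions; TinySeq2Seq, greedy_decode, and the vocabulary sizes are illustrative stand-ins, not the authors' implementation, which uses an attention and input-feeding parser and would decode with beam search.

import torch
import torch.nn as nn

PAD, BOS, EOS, V = 0, 1, 2, 100          # toy vocabulary for the sketch

class TinySeq2Seq(nn.Module):
    # GRU encoder-decoder; `hidden` mirrors the paper's 500 -> 50 compression.
    def __init__(self, hidden):
        super().__init__()
        self.emb = nn.Embedding(V, hidden, padding_idx=PAD)
        self.enc = nn.GRU(hidden, hidden, batch_first=True)
        self.dec = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, V)

    def forward(self, src, tgt_in):
        _, h = self.enc(self.emb(src))        # encode the source sequence
        y, _ = self.dec(self.emb(tgt_in), h)  # teacher-forced decoding
        return self.out(y)                    # (batch, time, vocab) logits

@torch.no_grad()
def greedy_decode(model, src, max_len=20):
    # 1-best decoding; the paper's setting would use beam search here.
    _, h = model.enc(model.emb(src))
    tok = torch.full((src.size(0), 1), BOS, dtype=torch.long)
    steps = []
    for _ in range(max_len):
        y, h = model.dec(model.emb(tok), h)
        tok = model.out(y).argmax(-1)         # pick the most likely token
        steps.append(tok)
    return torch.cat(steps, dim=1)

teacher = TinySeq2Seq(hidden=500)   # assume: already trained on gold parses
student = TinySeq2Seq(hidden=50)    # 10x smaller hidden layer, as in the paper

# Sequence-level KD: re-label the training sources with the teacher's 1-best
# output sequences, then train the student on those pseudo-targets with
# ordinary cross-entropy (no gold targets or per-step soft logits needed).
src = torch.randint(3, V, (8, 15))            # stand-in for linearized inputs
pseudo_tgt = greedy_decode(teacher, src)      # teacher's 1-best sequences
bos = torch.full((8, 1), BOS, dtype=torch.long)
tgt_in = torch.cat([bos, pseudo_tgt[:, :-1]], dim=1)  # shifted decoder input

opt = torch.optim.Adam(student.parameters())
opt.zero_grad()
loss = nn.CrossEntropyLoss(ignore_index=PAD)(
    student(src, tgt_in).reshape(-1, V), pseudo_tgt.reshape(-1))
loss.backward()
opt.step()

Because the student fits hard 1-best sequences rather than per-step soft distributions, the distilled training data is simpler and more deterministic than the original treebank, which is the usual explanation for why a 50-unit student can match or even slightly exceed the 500-unit baseline while decoding far faster.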
Keywords: deep learning, phrase structure parsing, sequence-to-sequence learning, model compression