
KIISE Transactions on Computing Practices


Title (English): Korean Dependency Parser using Dependency Relation Tag Distribution
Authors: Jaehyun An (안재현), Youngjoong Ko (고영중)
Citation: Vol. 24, No. 9, pp. 487-492 (Sept. 2018)
Korean Abstract (translated):
Dependency parsing is a stage of language analysis that identifies the structural relations in a natural-language sentence. As its base model, this paper uses the pointer network, which has shown high performance in prior work. Because dependency parsing must attach the dependency arc and the dependency relation label at the same time, multi-task learning was applied. To improve parsing performance, this paper proposes using task-specific features, namely morpheme- and syllable-level tag distributions. Since these tag distributions carry rich information for representing a word, they are used to extend the word representation. The resulting model achieves UAS 90.93% and LAS 88.29%, improving on the baseline system.
English Abstract:
Dependency parsing is a step in language analysis that predicts the syntax of sentences. For dependency parsing, the basic model used in this paper is the pointer network, which has performed well in previous research. Since dependency parsing requires attaching the dependency arc and the dependency relation label at the same time, we apply a multitask learning technique to our dependency parser. To improve the performance of dependency parsing, we propose using the distributions of task-specific morpheme- and syllable-level relation-label tags. Since morpheme- and syllable-level relation-label distributions carry a lot of information for expressing words, we can extend the word representation by using the relation-label distribution. As a result, we propose a model that achieves better performance, UAS 90.93% and LAS 88.29%, than the baseline systems.
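The two ideas in the abstract can be illustrated with a toy sketch: a word representation is extended by concatenating a relation-label tag distribution, and a pointer step scores candidate heads by attention over the extended representations. Everything below is hypothetical (the tiny embeddings, the tag distributions, and the dot-product scorer are illustrative stand-ins, not the paper's trained model):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Toy word embeddings (hypothetical 3-dim vectors) for "나는 밥을 먹었다"
# ("I ate rice"), plus an artificial ROOT token.
word_emb = {
    "ROOT":   [0.1, 0.0, 0.2],
    "나는":   [0.5, 0.2, 0.1],
    "밥을":   [0.3, 0.7, 0.0],
    "먹었다": [0.9, 0.1, 0.4],
}

# Hypothetical relation-label tag distributions per word, e.g.
# P(label | morphemes of the word) estimated from a treebank.
# Columns: [subject-like, object-like, predicate-like].
tag_dist = {
    "ROOT":   [0.0, 0.0, 1.0],
    "나는":   [0.8, 0.1, 0.1],
    "밥을":   [0.1, 0.8, 0.1],
    "먹었다": [0.1, 0.1, 0.8],
}

def extended_repr(w):
    # Extend the word representation by concatenating the tag distribution.
    return word_emb[w] + tag_dist[w]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def point_head(dep, candidates):
    # Pointer step: score every candidate head by dot-product attention
    # over the extended representations, then normalize with softmax.
    q = extended_repr(dep)
    scores = [dot(q, extended_repr(c)) for c in candidates]
    probs = softmax(scores)
    best = max(range(len(candidates)), key=lambda i: probs[i])
    return candidates[best], probs

# The object "밥을" should attach to the verb "먹었다".
head, probs = point_head("밥을", ["ROOT", "나는", "먹었다"])
```

In the paper's multitask setting, a second classifier head would predict the relation label from the same shared representation; here only the pointer (head-selection) step is sketched.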
Keywords: deep learning; Korean dependency parser; pointer networks; multitask learning; natural language processing