

Journal of KIISE (정보과학회논문지)


Korean Title: BERT 기반 End-to-end 신경망을 이용한 한국어 상호참조해결
English Title: Korean End-to-end Neural Coreference Resolution with BERT
Author(s): 김기훈 (Kihun Kim), 박천음 (Cheonum Park), 이창기 (Changki Lee), 김현기 (Hyunki Kim)
Citation: Journal of KIISE, Vol. 47, No. 10, pp. 942-947 (Oct. 2020)
Abstract
Coreference resolution is a natural language processing task that identifies the mentions that are targets of coreference resolution in a given document and clusters the mentions that refer to the same entity. For Korean coreference resolution, prior work has studied an end-to-end model that performs mention detection and mention clustering simultaneously, as well as a pointer-network method based on an encoder-decoder model. The BERT model released by Google has been applied to natural language processing tasks and has yielded large performance improvements. In this paper, we propose a BERT-based end-to-end neural network model for Korean coreference resolution. The model uses KorBERT, which is pre-trained on Korean data, and applies dependency parsing and named entity recognition features to reflect the structural and semantic characteristics of Korean. Experimental results show a CoNLL F1 of 71.00% (DEV) and 69.01% (TEST) on the ETRI question-answering domain coreference resolution dataset, higher than previous studies.
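The end-to-end formulation the abstract refers to is the span-ranking approach: every candidate span receives a mention score, and every ordered pair of spans receives an antecedent score. The sketch below illustrates that scoring scheme on top of contextual token embeddings. It is a minimal sketch, assuming PyTorch, placeholder layer sizes, and a simple [start; end] span representation; it omits the authors' KorBERT encoder and their dependency-parse and named-entity features, and the class and argument names are illustrative, not the paper's code.

# Minimal sketch of end-to-end coreference span scoring (Lee et al.-style),
# the general scheme this paper builds on. Layer sizes and the span
# representation are illustrative assumptions.
import torch
import torch.nn as nn

class E2ECorefScorer(nn.Module):
    def __init__(self, hidden_size: int = 768, ffnn_size: int = 150):
        super().__init__()
        # Span representation: concatenation of start and end token vectors.
        span_dim = 2 * hidden_size
        # Mention scorer s_m(i): how likely span i is a mention.
        self.mention_scorer = nn.Sequential(
            nn.Linear(span_dim, ffnn_size), nn.ReLU(), nn.Linear(ffnn_size, 1)
        )
        # Pairwise scorer s_a(i, j): how likely span j is an antecedent of i.
        self.pair_scorer = nn.Sequential(
            nn.Linear(2 * span_dim, ffnn_size), nn.ReLU(), nn.Linear(ffnn_size, 1)
        )

    def forward(self, token_reprs: torch.Tensor, spans: torch.Tensor):
        # token_reprs: (seq_len, hidden) contextual embeddings, e.g. from
        # a BERT-family encoder (the paper uses KorBERT).
        # spans: (num_spans, 2) long tensor of start/end token indices.
        starts, ends = spans[:, 0], spans[:, 1]
        g = torch.cat([token_reprs[starts], token_reprs[ends]], dim=-1)
        mention_scores = self.mention_scorer(g).squeeze(-1)  # (num_spans,)
        n = g.size(0)
        # All ordered span pairs; masking to valid antecedents (j < i)
        # is omitted here for brevity.
        pairs = torch.cat(
            [g.unsqueeze(1).expand(n, n, -1), g.unsqueeze(0).expand(n, n, -1)],
            dim=-1,
        )
        pair_scores = self.pair_scorer(pairs).squeeze(-1)  # (n, n)
        # Final score s(i, j) = s_m(i) + s_m(j) + s_a(i, j); a dummy
        # antecedent with fixed score 0 lets a span start a new cluster.
        coref_scores = (
            mention_scores.unsqueeze(1) + mention_scores.unsqueeze(0) + pair_scores
        )
        return mention_scores, coref_scores

# Example with stand-in encoder outputs:
#   scorer = E2ECorefScorer()
#   reprs = torch.randn(128, 768)
#   spans = torch.tensor([[0, 1], [5, 7], [10, 10]])
#   mention_scores, coref_scores = scorer(reprs, spans)

For context on the reported numbers: CoNLL F1 is the unweighted average of the MUC, B-cubed, and CEAF F1 scores, the standard composite metric for coreference resolution.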
Keywords: deep learning (딥 러닝), coreference resolution (상호참조해결), BERT, natural language processing (자연어처리)
Attachment: PDF download