

정보과학회논문지 (Journal of KIISE)


Korean Title: 사전 학습된 한국어 언어 모델의 보정
English Title: Calibration of Pre-trained Language Model for the Korean Language
Authors: 정소영 (Soyeong Jeong), 양원석 (Wonsuk Yang), 박채훈 (ChaeHun Park), 박종철 (Jong C. Park)
Citation: Vol. 48, No. 4, pp. 434-443 (April 2021)
Korean Abstract
The development of deep learning models has shown performance surpassing that of humans on computer vision and natural language understanding problems. In particular, Transformer-based pre-trained models have recently achieved high performance on natural language understanding problems such as question answering and dialogue. However, compared with the rapid pace at which deep learning models have developed, how they operate remains relatively poorly understood. One method for interpreting deep learning models is model calibration, which measures how closely the model's predicted values match the actual values. In this study, we performed model calibration in order to interpret pre-trained Korean deep learning models. We examined whether pre-trained Korean language models capture the ambiguity contained in sentences, and applied smoothing techniques so that sentence ambiguity can be expressed quantitatively through confidence levels. In addition, we evaluated, from the perspective of model calibration, the changes in sentence meaning caused by the grammatical characteristics of Korean, and quantitatively verified whether pre-trained language models understand these grammatical characteristics well.
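The abstract describes calibration as the agreement between a model's predicted confidence and its actual accuracy. As an illustration of how that agreement is commonly quantified, the following is a minimal sketch of Expected Calibration Error (ECE) in Python; the function name, binning scheme, and toy data are assumptions made for illustration and are not taken from the paper.

# Illustrative sketch only (not the paper's code): Expected Calibration Error (ECE),
# a standard way to quantify how well confidence matches accuracy.
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """probs: (N, C) softmax outputs; labels: (N,) gold class indices."""
    confidences = probs.max(axis=1)        # per-example confidence
    predictions = probs.argmax(axis=1)     # per-example predicted class
    accuracies = (predictions == labels).astype(float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # weighted |accuracy - confidence| gap within this confidence bin
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Toy usage with random predictions, for illustration only.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 3, size=1000)
print(f"ECE: {expected_calibration_error(probs, labels):.4f}")

A well-calibrated model would place most examples in bins where average confidence and observed accuracy nearly coincide, yielding an ECE close to zero.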
English Abstract
The development of deep learning models has continuously demonstrated performance beyond human reach in various tasks such as computer vision and natural language understanding. In particular, pre-trained Transformer models have recently shown remarkable performance on natural language understanding problems such as question answering (QA) and dialogue tasks. However, despite the rapid development of deep learning models such as Transformer-based models, their underlying mechanisms remain relatively unknown. As a method of analyzing deep learning models, calibration measures the extent to which the predicted value of the model (confidence) matches the actual value (accuracy). Our study aims at interpreting pre-trained Korean language models based on calibration. In particular, we have analyzed whether pre-trained Korean language models can capture ambiguities in sentences and applied smoothing methods to quantitatively measure such ambiguities with confidence. In addition, in terms of calibration, we have evaluated the capability of pre-trained Korean language models in identifying grammatical characteristics of the Korean language, which affect semantic changes in Korean sentences.
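Both abstracts mention applying smoothing methods so that sentence ambiguity can be reflected in the model's confidence. The sketch below shows label smoothing, one widely used smoothing technique, purely as an illustration; the record does not specify which smoothing methods the authors actually used, and the helper name and epsilon value here are assumptions.

# Illustrative sketch only (not the paper's code): label smoothing turns one-hot
# targets into softened distributions so a model's confidence is not pushed
# toward 1.0 on every training example.
import numpy as np

def smooth_labels(labels, num_classes, epsilon=0.1):
    """labels: (N,) gold class indices -> (N, num_classes) smoothed targets."""
    one_hot = np.eye(num_classes)[labels]
    return one_hot * (1.0 - epsilon) + epsilon / num_classes

# Example: with epsilon=0.1 and 3 classes, the gold class gets ~0.933
# and each other class gets ~0.033.
print(smooth_labels(np.array([0, 2]), num_classes=3, epsilon=0.1))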
Keywords: 언어 모델 (language model), 애매성 (ambiguity), 모델의 보정 (calibration), 보조사 (postpositional particles), 부사 (adverbs)