
KIPS Transactions on Software and Data Engineering


Korean Title: KoEPT 기반 한국어 수학 문장제 문제 데이터 분류 난도 분석
English Title: Analyzing Korean Math Word Problem Data Classification Difficulty Level Using the KoEPT Model
Authors: Rhim Sangkyu (임상규), Ki Kyung Seo (기경서), Kim Bugeun (김부근), Gweon Gahgene (권가진)
Citation: Vol. 11, No. 8, pp. 315-324 (Aug. 2022)
Abstract
In this paper, we propose KoEPT, a Transformer-based generative model for automatically solving math word problems. A math word problem is a problem written in natural language that describes an everyday situation in mathematical form. Solving such problems requires an artificial intelligence model to grasp the logic implied in the text, so the task has been studied widely, both in Korea and abroad, as a way to improve the language understanding ability of artificial intelligence. For Korean, prior studies have mainly attempted to solve problems by classifying them into templates, but such techniques are difficult to apply to datasets that cover diverse equations and therefore have high classification difficulty. To address this, this paper uses the KoEPT model, which employs 'expression' tokens and a pointer network. To measure the model's performance, we first measured the classification difficulty of the existing Korean math word problem datasets IL, CC, and ALG514, and then evaluated KoEPT using 5-fold cross-validation. On these Korean datasets, KoEPT achieved 99.1% on CC, comparable to the existing state-of-the-art (SOTA) performance, and set new SOTA results of 89.3% and 80.5% on IL and ALG514, respectively. Moreover, the evaluation showed that KoEPT performs relatively better on datasets with high classification difficulty. Through an ablation study, we showed that the 'expression' tokens and the pointer network are what allow KoEPT to be less affected by classification difficulty while achieving good performance.
Keywords: Math Word Problems, Generation Model, Transformer, Pointer Network, Classification Difficulty
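The abstract above reports evaluating KoEPT with 5-fold cross-validation on the IL, CC, and ALG514 datasets. As a rough illustration of that evaluation protocol only, the following minimal Python sketch runs a generic 5-fold answer-accuracy loop; the toy problem/answer pairs and the train_solver()/solve() stubs are hypothetical placeholders, not the authors' KoEPT implementation.

# Illustrative sketch only (not the authors' code): a generic 5-fold
# cross-validation loop measuring answer accuracy of a word-problem solver,
# following the evaluation protocol described in the abstract.
from sklearn.model_selection import KFold

# Toy stand-in for a Korean math word problem dataset: (problem text, gold answer).
# English gloss of the first item: "There are 3 apples and you buy 2 more; how many in total?" -> 5
data = [
    ("사과가 3개 있고 2개를 더 사면 모두 몇 개인가?", 5),
    ("연필 10자루를 2명이 똑같이 나누어 가지면 한 명당 몇 자루인가?", 5),
] * 10  # repeat the toy pairs so every fold has test items

def train_solver(train_pairs):
    """Placeholder for fitting a generative solver (e.g., a Transformer) on one fold."""
    def solve(problem_text):
        # A real solver would generate an equation/expression and evaluate it;
        # this stub just returns a constant so the loop runs end to end.
        return 5
    return solve

kfold = KFold(n_splits=5, shuffle=True, random_state=42)
fold_accuracies = []
for train_idx, test_idx in kfold.split(data):
    solver = train_solver([data[i] for i in train_idx])
    test_pairs = [data[i] for i in test_idx]
    correct = sum(solver(problem) == answer for problem, answer in test_pairs)
    fold_accuracies.append(correct / len(test_pairs))

# Report the mean answer accuracy over the 5 folds.
print(f"5-fold mean answer accuracy: {sum(fold_accuracies) / len(fold_accuracies):.1%}")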