정보과학회논문지 (Journal of KIISE)
한글제목(Korean Title) |
자연어 추론에서의 교차 검증 앙상블 기법 |
¿µ¹®Á¦¸ñ(English Title) |
Cross-Validated Ensemble Methods in Natural Language Inference |
저자(Author) |
양기수
황태선
오동석
박찬준
임희석
Kisu Yang
Taesun Whang
Dongsuk Oh
Chanjun Park
Heuiseok Lim
|
원문수록처(Citation) |
Vol. 48, No. 2, pp. 154-159 (2021.02) |
한글내용 (Korean Abstract) |
An ensemble method is a machine learning technique that combines several models to produce a final prediction and reliably improves the performance of deep learning models. However, most ensemble techniques require additional models or separate computations used only for the ensemble. We therefore propose a cross-validated ensemble method that combines ensembling with cross-validation, reducing the cost of the ensemble computation while improving generalization performance. To demonstrate its effectiveness, we show improved performance over existing ensemble methods using BiLSTM, CNN, ELMo, and BERT models on the MRPC and RTE datasets. We also discuss the generalization principle that arises from cross-validation and the performance changes according to the cross-validation parameter.
|
영문내용 (English Abstract) |
An ensemble method is a machine learning technique that combines several models to make the final prediction, and it reliably improves the performance of deep learning models. However, most techniques require additional models or computations used only for the ensemble. To address this problem, we propose a cross-validated ensemble method that reduces the cost of ensemble computation via cross-validation and improves generalization through the ensemble. To demonstrate the effectiveness of the proposed method, we show that it outperforms previous ensemble methods with the BiLSTM, CNN, ELMo, and BERT models on the MRPC and RTE datasets. We also discuss the generalization mechanism involved in cross-validation, along with the performance changes caused by the cross-validation hyperparameter.
|
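The core idea in the abstract, reusing the models that K-fold cross-validation already trains as the ensemble members, can be sketched as follows. This is a minimal illustration under assumed substitutions, not the authors' implementation: scikit-learn's `KFold` and `LogisticRegression` on synthetic data stand in for the paper's NLI models (BiLSTM, CNN, ELMo, BERT) and datasets (MRPC, RTE). Each fold's model is a by-product of cross-validation, so ensembling them requires no extra training.

```python
# Sketch of a cross-validated ensemble: train one model per CV fold,
# then average the fold models' predictions at inference time.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

# Synthetic binary classification data standing in for an NLI dataset.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, y_train = X[:250], y[:250]
X_test, y_test = X[250:], y[250:]

# One model per fold; the fold split itself provides model diversity,
# so no models are trained solely for the ensemble.
models = []
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X_train):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train[train_idx], y_train[train_idx])
    models.append(clf)

# Ensemble by averaging predicted class probabilities across folds.
avg_proba = np.mean([m.predict_proba(X_test) for m in models], axis=0)
ensemble_pred = avg_proba.argmax(axis=1)
accuracy = (ensemble_pred == y_test).mean()
```

The number of folds (the cross-validation hyperparameter discussed in the abstract) trades off per-model training data against ensemble size.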
키워드(Keyword) |
앙상블
딥러닝
자연어처리
자연어 추론
ensemble
deep learning
natural language processing
natural language inference
|