
Database Research Journal (SIGDB)


Korean Title: A Study on a Discharge Summary CRPT Model for Improving Clinical Language Understanding Performance
English Title: Discharge Summary CRPT Model using Contrastive Loss
Author(s): Taeri Kim, Jiho Heo, Sang-Wook Kim, Yuchae Jung
Citation: Vol. 39, No. 2, pp. 76-84 (2023. 08)
Korean Abstract
BERT is a pre-trained language model that has shown high performance on natural language understanding tasks and is being applied in various domains, including biomedicine. In clinical medicine, accurately inferring the semantic relationships between sentences written in natural language that includes specialized terminology is essential for correctly understanding a patient's medical records and clinical information. However, research on pre-training BERT with large-scale clinical data and improving its sentence-level semantic understanding has not yet been actively pursued. To address this problem, this study proposes a method for improving the performance of the BERT model using contrastive representation pre-training. The proposed Discharge Summary CRPT model improves medical language understanding by making the following two changes while pre-training BERT on discharge summary notes. First, replacing the cross-entropy loss of the next sentence prediction step with a contrastive loss improves the accuracy of contextual semantic inference between sentences. Second, the existing random masking is replaced with whole word masking, and we verify whether this improves natural language understanding on clinical text. Evaluated on the BLUE benchmark datasets (MedNLI, BioSSES), the proposed model outperforms the existing BERT model in natural language inference accuracy on clinical text (accuracy = 0.825) and in sentence similarity (0.775).
English Abstract
Recently, BERT (Bidirectional Encoder Representations from Transformers) has shown tremendous improvements in performance on various NLP tasks and has been applied to many domains, including the biomedical field. In the clinical domain in particular, the semantic relationship between sentences is very important for understanding a patient's medical record and health history from physical examinations. However, the pre-training method of the current Clinical BERT model struggles to capture sentence-level semantics. To address this problem, we propose a Discharge Summary CRPT (contrastive representations pre-training) model, which enhances the contextual meaning between sentences by replacing the cross-entropy loss with a contrastive loss in the next sentence prediction (NSP) task. We also try to improve performance by changing the random masking technique to whole word masking (WWM) for the masked language model (MLM) objective. In particular, we focus on enhancing the language representations of the BERT model by pre-training on discharge summary notes to optimize clinical text understanding. We demonstrate that our Discharge Summary CRPT model yields performance improvements on clinical NLP tasks with the BLUE (Biomedical Language Understanding Evaluation) benchmark datasets (MedNLI and BioSSES).
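A minimal sketch of the whole word masking change (an assumption about how such masking is typically done, not the paper's implementation): when any WordPiece of a word is selected, all of its sub-tokens are masked together, so multi-piece clinical terms are either fully masked or left intact.

import random

def whole_word_mask(tokens, mask_prob=0.15, mask_token="[MASK]"):
    # Group WordPiece tokens: a piece starting with "##" belongs to the
    # previous word, so its index joins that word's group.
    groups, masked = [], list(tokens)
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and groups:
            groups[-1].append(i)
        else:
            groups.append([i])
    # Mask whole groups, never individual pieces of a word.
    for piece_ids in groups:
        if random.random() < mask_prob:
            for i in piece_ids:
                masked[i] = mask_token
    return masked

# e.g. ["dis", "##charge", "summary"] -> ["[MASK]", "[MASK]", "summary"] or unchanged,
# unlike random masking, which could mask "##charge" alone.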
Keywords: medication recommendation; electronic health records (EHR) graph; health informatics; Contrastive Representation Pre-training; Contrastive Loss; Clinical BERT