
Conference Proceedings

2016 Winter Conference, Korean Institute of Information Scientists and Engineers (KIISE)


Korean Title: 한국어를 위한 개선된 워드 임베딩 (Improved Word Embeddings for Korean)
English Title: Better Word Embeddings for Korean
Authors: Ceyda Cinarel, Byoung-Tak Zhang
Citation: Vol. 43, No. 2, pp. 627-629 (Dec. 2016)
Abstract:
Vector representations of words that capture semantic and syntactic information accurately are critical to the performance of models that use these vectors as inputs. Algorithms that use only the surrounding context at the word level ignore subword-level relationships, which carry important meaning, especially for highly inflected languages such as Korean. In this paper we compare the word vectors generated by incorporating different levels of subword information, through visualization using t-SNE, for a small Korean dataset.
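The abstract contrasts word-level context models with models that incorporate subword information. One common way to incorporate it is fastText-style composition, where a word vector is the sum of vectors for its character n-grams; the sketch below is a hypothetical illustration of that composition (not the authors' implementation), using deterministic hash-derived vectors in place of a learned embedding table. It shows why inflected Korean forms that share a stem end up with correlated vectors: they share boundary and stem n-grams.

```python
import hashlib

def char_ngrams(word, n_min=2, n_max=3):
    """Character n-grams of a word, with '<' and '>' boundary markers
    (fastText-style)."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def ngram_vector(ngram, dim=16):
    """Deterministic pseudo-random vector for an n-gram; a hash-based
    stand-in for a trained embedding table (illustrative only)."""
    h = hashlib.md5(ngram.encode("utf-8")).digest()  # 16 bytes
    return [(b - 127.5) / 127.5 for b in h[:dim]]

def word_vector(word, dim=16):
    """Word vector = mean of its subword n-gram vectors (fastText-style
    composition)."""
    grams = char_ngrams(word)
    vec = [0.0] * dim
    for g in grams:
        for i, x in enumerate(ngram_vector(g, dim)):
            vec[i] += x
    return [x / len(grams) for x in vec]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

# Two inflected forms of 먹다 ("to eat") share stem/boundary n-grams,
# so their composed vectors overlap in those components.
shared = set(char_ngrams("먹었다")) & set(char_ngrams("먹는다"))
print(sorted(shared))  # → ['<먹', '다>']
```

Because the n-gram vectors here are only hashes, the cosine similarities are noise plus the shared-n-gram overlap; in a trained model (e.g. fastText) the n-gram table is learned, and the same composition lets morphologically related words pull each other's representations together, which is the effect the paper visualizes with t-SNE.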