
JIPS (Korea Information Processing Society)


Title: Multi-channel Long Short-Term Memory with Domain Knowledge for Context Awareness and User Intention
Author(s): Dan-Bi Cho, Hyun-Young Lee, Seung-Shik Kang
Citation: Vol. 17, No. 5, pp. 867-878 (Oct. 2021)
Abstract:
In context awareness and user intention tasks, dataset construction is expensive because specific domain data are required. Although pretraining with a large corpus can effectively resolve the issue of lack of data, it ignores domain knowledge. Herein, we concentrate on data domain knowledge while addressing data scarcity and accordingly propose a multi-channel long short-term memory (LSTM). Because multi-channel LSTM integrates pretrained vectors such as task and general knowledge, it effectively prevents catastrophic forgetting between the vectors of task and general knowledge and represents the context as a set of features. To evaluate the proposed model against the baseline model, a single-channel LSTM, we performed two tasks: voice phishing detection with context awareness and movie review sentiment classification. The results verified that multi-channel LSTM outperforms single-channel LSTM in both tasks. We further experimented on different multi-channel LSTMs depending on the domain and data size of the general knowledge in the model, and confirmed the effect of multi-channel LSTM in integrating the two types of knowledge, from downstream task data and raw data, to overcome the lack of data.
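The core idea described in the abstract — keeping task-knowledge and general-knowledge pretrained vectors in separate channels and combining them only at the feature level — can be illustrated with a minimal sketch. This is not the authors' implementation: the sizes, the random "pretrained" embeddings, and the concatenation of final hidden states are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_channel(embeddings, token_ids, W, U, b, hidden):
    """Run one LSTM channel over a token sequence; return the final hidden state."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for t in token_ids:
        x = embeddings[t]
        z = W @ x + U @ h + b  # stacked gate pre-activations, length 4*hidden
        i, f, o = (sigmoid(z[k * hidden:(k + 1) * hidden]) for k in range(3))
        g = np.tanh(z[3 * hidden:])
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

# Hypothetical sizes; random stand-ins for the two kinds of pretrained vectors.
vocab, emb_dim, hidden = 50, 8, 6
task_emb = rng.normal(size=(vocab, emb_dim))     # task-knowledge channel
general_emb = rng.normal(size=(vocab, emb_dim))  # general-knowledge channel

def make_params():
    return (rng.normal(scale=0.1, size=(4 * hidden, emb_dim)),
            rng.normal(scale=0.1, size=(4 * hidden, hidden)),
            np.zeros(4 * hidden))

params_task, params_gen = make_params(), make_params()
tokens = [3, 17, 42, 5]  # a toy input sentence as token ids

# Each channel keeps its own pretrained vectors and parameters, so training one
# cannot overwrite (catastrophically forget) the knowledge held by the other.
h_task = lstm_channel(task_emb, tokens, *params_task, hidden=hidden)
h_gen = lstm_channel(general_emb, tokens, *params_gen, hidden=hidden)

# The concatenated hidden states represent the context as a set of features;
# a final linear classifier would consume this vector.
features = np.concatenate([h_task, h_gen])
print(features.shape)  # (12,)
```

A single-channel baseline, by contrast, would feed one embedding table into one LSTM, forcing both kinds of knowledge to share the same parameters.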
Keyword(s): Context Awareness, Domain Adaptation, Multi-channel LSTM, User Intention
Attachment: PDF