정보과학회논문지 (Journal of KIISE)
한글제목(Korean Title)
차량 엣지 컴퓨팅 환경에서 강화학습 기반의 서비스 마이그레이션
영문제목(English Title)
Service Migration Based on Reinforcement Learning in Vehicular Edge Computing
저자(Author)
문성원 (Sungwon Moon)
임유진 (Yujin Lim)
원문수록처(Citation)
Vol. 48, No. 2, pp. 243-248 (Feb. 2021)
한글내용 (Korean Abstract)
사용자에게 초저지연 및 실시간 서비스를 제공할 수 있어 엣지 컴퓨팅은 사물인터넷을 이끌 수 있는 유망 기술로 부상하고 있다. 하지만 사용자의 이동성과 엣지 서버의 제한적인 커버리지 때문에 서비스 중단과 QoS 저하를 초래한다. 그래서 끊김 없는 서비스를 보장하기 위해 서비스 마이그레이션이 중요한 이슈로 다뤄진다. 본 논문에서는 차량 엣지 컴퓨팅 환경에서 Q-learning 강화학습 기법을 사용하여 마이그레이션에 관해 결정하는 알고리즘을 제안한다. 제안한 알고리즘은 차량의 이동에 따라 마이그레이션 진행 여부와 대상을 결정하는 것이다. 제안한 알고리즘의 목적은 지연 제약조건을 충족하며 시스템 비용을 최소화하는 것이다. 본 논문에서는 제안 알고리즘의 성능 비교를 통하여 기존 기법에 비하여 마이그레이션 진행 여부와 대상 결정의 측면에서 더 나은 성능을 보임을 확인하였다.
영문내용 (English Abstract)
As edge computing can provide low-latency and real-time services, it is emerging as a promising technology that can lead the Internet of Things (IoT). However, user mobility and the limited coverage of edge servers cause service interruption and reduce Quality of Service (QoS). Service migration is therefore considered an important issue in guaranteeing seamless service. In this paper, a migration decision algorithm using Q-learning, a reinforcement learning method, is proposed for the vehicular edge computing environment. The proposed algorithm decides whether to migrate and where to migrate in order to meet the delay constraint and minimize the system cost. In the performance evaluation, we compared the proposed algorithm with other algorithms in terms of deciding whether to migrate and where to migrate, and the proposed algorithm showed better performance than the other algorithms.
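The kind of decision process the abstract describes can be sketched as a small tabular Q-learning agent: the state pairs the vehicle's position with the currently serving edge server, each action selects which server should serve next (choosing the current one means no migration), and the reward is the negative system cost combining latency and a migration penalty. This is only an illustrative toy, assuming a simple 1-D zone model and made-up cost weights; the state/action formulation, class names, and parameters below are not the paper's actual model.

```python
import random

class MigrationAgent:
    """Tabular Q-learning over (vehicle zone, serving server) states.

    Illustrative sketch only: action a means "serve the vehicle from edge
    server a"; picking the current server corresponds to no migration.
    """

    def __init__(self, n_zones, n_servers, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.n_servers = n_servers
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # One Q-value per (state, action) pair, initialized to zero.
        self.q = {(z, s): [0.0] * n_servers
                  for z in range(n_zones) for s in range(n_servers)}

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.randrange(self.n_servers)
        qs = self.q[state]
        return qs.index(max(qs))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update toward the TD target.
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])


def system_cost_reward(zone, server, action, comm_cost=1.0, mig_cost=2.0):
    """Negative system cost: distance-proportional communication latency
    plus a one-off penalty whenever the serving server changes (migration).
    The cost weights are made-up example values."""
    latency = comm_cost * abs(zone - action)
    migration = mig_cost if action != server else 0.0
    return -(latency + migration)
```

With a reward shaped like this, the learned policy keeps the service on its current server while the vehicle is nearby and migrates once the latency cost of staying outweighs the one-off migration penalty, which mirrors the whether-and-where trade-off the abstract describes.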
키워드(Keyword)
차량 엣지 컴퓨팅
서비스 마이그레이션
강화학습
Q-learning
vehicular edge computing
service migration
reinforcement learning
Q-learning
Copyright(c) Computer Science Engineering Research Information Center. All rights reserved.