
KIPS Transactions on Computer and Communication Systems


Korean Title (translated): Q-Learning-Based Equal Energy Consumption Routing Protocol Technique for Extending the Lifespan of Ad-Hoc Sensor Networks
English Title: Equal Energy Consumption Routing Protocol Algorithm Based on Q-Learning for Extending the Lifespan of Ad-Hoc Sensor Network
Authors: Kim Ki Sang, Kim Sung Wook
Citation: Vol. 10, No. 10, pp. 269-276 (Oct. 2021)
Korean Abstract (translated):
Recently, smart sensors have been used in a variety of environments, and research on the implementation of ad-hoc sensor networks (ASNs) is actively underway. However, existing sensor network routing algorithms focus on specific control problems and cannot be directly applied to ASN operation. In this paper, we propose a new routing protocol using Q-learning; the main goal of the proposed approach is to extend the lifetime of the ASN through efficient energy allocation while maintaining balanced system performance. The proposed method improves the effectiveness of Q-learning by considering various environmental factors. In particular, each node stores the Q-values of its adjacent nodes in its own Q-table; every time a data transmission is executed, the Q-values are updated and accumulated so that the optimal routing path is selected. Simulation results confirm that the proposed method can select energy-efficient routing paths and achieves better network performance than existing ASN routing protocols.
English Abstract:
Recently, smart sensors have been used in various environments, and the implementation of ad-hoc sensor networks (ASNs) is an active research topic. Unfortunately, traditional sensor network routing algorithms focus on specific control issues and cannot be directly applied to ASN operation. In this paper, we propose a new routing protocol based on Q-learning. The main challenge of the proposed approach is to extend the lifetime of ASNs through efficient energy allocation while maintaining balanced system performance. The proposed method enhances the Q-learning effect by considering various environmental factors. When a transmission fails, a node penalty is accumulated to increase the probability of successful communication. In particular, each node stores the Q-values of its adjacent nodes in its own Q-table; every time a data transfer is executed, the Q-values are updated and accumulated, so each node learns to select the optimal routing route. Simulation results confirm that the proposed method chooses energy-efficient routing paths and achieves excellent network performance compared with existing ASN routing protocols.
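The abstract describes the mechanism only at a high level: each node keeps a Q-table over its adjacent nodes, updates it on every transmission, and accumulates a penalty when a transmission fails. A minimal sketch of that per-node bookkeeping is shown below; the learning rate, discount factor, epsilon-greedy next-hop selection, and the concrete reward/penalty values are illustrative assumptions, not details given in this record.

```python
import random


class QRoutingNode:
    """Sketch of one sensor node that keeps Q-values for its neighbors."""

    def __init__(self, node_id, neighbors, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.node_id = node_id
        self.alpha = alpha      # learning rate (assumed value)
        self.gamma = gamma      # discount factor (assumed value)
        self.epsilon = epsilon  # exploration rate (assumed value)
        # Q-table: one entry per adjacent node, as the abstract describes
        self.q_table = {n: 0.0 for n in neighbors}

    def choose_next_hop(self):
        """Epsilon-greedy selection of the next hop from the local Q-table."""
        if random.random() < self.epsilon:
            return random.choice(list(self.q_table))
        return max(self.q_table, key=self.q_table.get)

    def update(self, next_hop, reward, next_hop_best_q):
        """Standard Q-learning update after a transmission attempt.

        A failed transmission would be reported here as a negative reward,
        so the penalty accumulates in the Q-table and steers future
        route choices toward links with higher success probability.
        """
        old_q = self.q_table[next_hop]
        self.q_table[next_hop] = old_q + self.alpha * (
            reward + self.gamma * next_hop_best_q - old_q
        )
```

In the paper's setting the reward would presumably also reflect the neighbors' residual energy, so that traffic spreads out and energy is consumed evenly across the network; that term is omitted here because its exact form is not given in this record.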
Keywords: Reinforcement Learning, Q-Learning, Ad-Hoc Sensor Network, Energy Consumption