

KIISE Transactions on Computing Practices (정보과학회 컴퓨팅의 실제 논문지)


Korean Title: 딥 러닝 훈련 시간 개선을 위한 쓰레드 기반 셔플링 기법
English Title: Thread-based Shuffling Scheme for Improving the Training Time of Deep Learning
Authors: Jinseo Choi (최진서), Donghyun Kang (강동현)
Citation: Vol. 28, No. 2, pp. 75-80 (Feb. 2022)
Abstract:
Deep learning is being applied in a wide range of environments because it offers new directions for solving long-standing problems, and deep learning frameworks continue to evolve accordingly. However, the data-processing step required to train on large volumes of data still leaves room for improvement. In this paper, we propose a thread-based shuffling scheme, a new data-processing method that speeds up the training process of deep learning. The proposed scheme improves CPU utilization by using multiple threads to rearrange the training data in random order. Through experiments, we confirmed that the proposed scheme reduces shuffling time by up to 50.7% and total training time by up to 13.6% compared with the conventional approach. We also examined the accuracy issue that could arise from splitting the data range across threads and confirmed that the proposed scheme does not affect accuracy.
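The record does not include the paper's implementation, but the general idea it describes can be sketched: partition the index range of the dataset across threads, let each thread shuffle its own partition concurrently, and concatenate the results. The function and parameter names below are hypothetical illustrations, not the authors' code; a minimal sketch under those assumptions:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def threaded_shuffle(indices, num_threads=4, seed=None):
    """Hypothetical sketch of thread-based shuffling: each thread
    shuffles only its own slice of the index range, concurrently."""
    rng = random.Random(seed)
    # Split the index list into one contiguous chunk per thread.
    chunk = (len(indices) + num_threads - 1) // num_threads
    parts = [indices[i:i + chunk] for i in range(0, len(indices), chunk)]
    # Give each thread its own seeded RNG so runs are reproducible.
    seeds = [rng.random() for _ in parts]

    def shuffle_part(part, s):
        random.Random(s).shuffle(part)  # in-place shuffle of one chunk
        return part

    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        shuffled = list(pool.map(shuffle_part, parts, seeds))
    # Concatenate the per-thread chunks; every sample appears exactly once.
    return [i for part in shuffled for i in part]

order = threaded_shuffle(list(range(10)), num_threads=2, seed=0)
```

Note that because each thread permutes only its own slice, samples never cross chunk boundaries, so the result is less random than a global shuffle; the abstract reports that, in the authors' experiments, this restriction did not affect model accuracy.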
Keywords: deep learning, shuffling, multi-threading, CPU utilization