
KIPS Transactions on Software and Data Engineering


Korean Title (translated): Performance Improvement of a Convolutional Neural Network Using an Agile Activation Function
English Title: Performance Improvement Method of Convolutional Neural Network Using Agile Activation Function
Author(s): Na Young Kong, Young Min Ko, Sun Woo Ko
Citation: Vol. 9, No. 7, pp. 213-220 (July 2020)
Korean Abstract (translated)
A convolutional neural network consists of convolutional layers and fully connected layers, and each of these layers uses a nonlinear activation function. The activation function imitates the way a neuron transmits information: when a signal is passed between neurons, it is transmitted if the input signal exceeds a certain threshold and is not sent if it falls short of that threshold. Conventional activation functions have no relationship with the loss function, which slows the search for the optimal solution; to improve this, an agile activation function that generalizes the activation function is proposed. The parameter of the agile activation function is selected during backpropagation through a learning process that uses the first derivative of the loss function with respect to the parameter, and by reducing the loss function in this way the performance of the deep neural network can be improved. Through the MNIST classification problem, it was confirmed that the agile activation function outperforms conventional activation functions.
English Abstract
The convolutional neural network is composed of convolutional layers and fully connected layers, and a nonlinear activation function is used in each of these layers. The activation function simulates how a neuron transmits information: when passing a signal between neurons, it transmits the signal if the input signal is above a certain criterion and does not send it otherwise. The conventional activation function has no relationship with the loss function, so the process of finding the optimal solution is slow. In order to improve this, an agile activation function that generalizes the activation function is proposed. The agile activation function can improve the performance of the deep neural network by selecting the optimal agile parameter through a learning process that uses the first derivative of the loss function with respect to the agile parameter during backpropagation. Through the MNIST classification problem, we confirmed that agile activation functions have superior performance over conventional activation functions.
Keyword(s): Convolutional Neural Network, Agile Activation Function, Backpropagation, Learning
Attachment: PDF download
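
The abstracts describe an activation function with a learnable ("agile") parameter that is updated during backpropagation using the first derivative of the loss with respect to that parameter. The paper's exact functional form is not given on this record page, so the sketch below is only a minimal illustration: it assumes a hypothetical generalized sigmoid f(x) = 1 / (1 + exp(-k*x)) with a learnable parameter k, written in PyTorch; the names AgileActivation and SmallCNN are illustrative and do not come from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AgileActivation(nn.Module):
    """Activation with a learnable ("agile") parameter k.

    Assumed form for illustration only: f(x) = sigmoid(k * x).
    Because k is an nn.Parameter, backpropagation produces dL/dk and the
    optimizer updates k together with the network weights.
    """
    def __init__(self, k_init: float = 1.0):
        super().__init__()
        self.k = nn.Parameter(torch.tensor(k_init))  # learnable agile parameter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.k * x)

class SmallCNN(nn.Module):
    """Small MNIST-style CNN with an agile activation after each layer."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, 10)
        self.act1 = AgileActivation()
        self.act2 = AgileActivation()

    def forward(self, x):
        x = F.max_pool2d(self.act1(self.conv1(x)), 2)  # 28x28 -> 14x14
        x = F.max_pool2d(self.act2(self.conv2(x)), 2)  # 14x14 -> 7x7
        return self.fc(x.flatten(1))

# One optimization step on dummy MNIST-shaped data: the agile parameters
# receive gradients (dL/dk) and are updated alongside the weights.
model = SmallCNN()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(8, 1, 28, 28)           # stand-in for an MNIST batch
y = torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(x), y)
loss.backward()
print(model.act1.k.grad)                # first derivative of the loss w.r.t. k
opt.step()
```

Registering the agile parameter as an nn.Parameter is what makes it part of the learning process: the optimizer receives dL/dk from backpropagation and adjusts k alongside the convolutional and fully connected weights, which is the mechanism the abstract attributes to the agile activation function.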