(Former) Journal of the Korea Information Science Society
Korean Title |
기대 출력값을 이용한 오류역전파의 성능 개선 |
English Title |
Improving the Performance of Error Back-Propagation with Expected Target Values |
Author |
김병철
김태진
김동규
조동섭
황희용
Byung Cheol Kim
Tae Jin Kim
Dong Kyu Kim
Dong Sub Cho
Hee Yeung Hwang
|
Citation |
VOL 20 NO. 07 PP. 1039 ~ 1049 (1993. 07) |
Korean Abstract |
º» ³í¹®¿¡¼´Â Ç¥ÁØ ¿À·ù¿ªÀüÆÄ(error back-propagation) ÇнÀ±ÔÄ¢ÀÌ °®°í ÀÖ´Â ¹®Á¦ÁßÀÇ ÇϳªÀÎ ½Å°æ¸Á ¸¶ºñ(network paralysis) Çö»óÀ» ¿ÏÈÇÏ°í ¶ÇÇÑ ¼ö·Å¼Óµµ(convergence speed)µµ Çâ»ó½ÃŲ °³¼±µÈ ¿À·ù¿ªÀüÆÄ ÇнÀ±ÔÄ¢À» Á¦¾ÈÇÏ°íÀÚ ÇÑ´Ù. Ç¥ÁØ ¿À·ù¿ªÀüÆÄ ÇнÀ±ÔÄ¢¿¡¼´Â °¢°¢ÀÇ ¿¬°á°µµ(connection weight)¿¡ ´ëÇÑ ¿ÀÂ÷Ç¥¸é(error surface)ÀÌ °íÁ¤µÇ¾î ÀÖ´Ù°í °¡Á¤ÇÏ°í ÀÖ´Ù. ÇÏÁö¸¸ ½ÇÁ¦·Î ÇнÀ¿¡ ÀÇÇØ °¢°¢ÀÇ ¿¬°á°µµ¿¡ º¯°æÇÏ°Ô µÇ¸é, º¯°æµÈ ¿¬°á°µµ¿¡ ÀÇÇØ °¢ ³ëµåÀÇ Ãâ·Â°ªÀÌ º¯ÇÏ°Ô µÇ¾î ¾ÆÁ÷ º¯°æµÇÁö ¾ÊÀº ¿¬°á°µµ¿¡ ´ëÇÑ ¿ÀÂ÷Ç¥¸éÀÌ À̵¿ÇÏ°Ô µÇ¹Ç·Î °æ»çÃßÀû(gradient descent)½Ã ¿ÀÂ÷°¡ ¹ß»ýÇÏ°Ô µÈ´Ù. °³¼±µÈ ¿À·ù¿ªÀüÆÄ ÇнÀ±ÔÄ¢Àº ¿ÀÂ÷Ç¥¸éÀÇ À̵¿¿¡ ÀÇÇÑ °æ»çÃßÀû ¿ÀÂ÷¸¦ º¸Á¤Çϱâ À§ÇØ, Ãâ·Â´Ü¿¡¼ÀÇ ¿ÀÂ÷¸¦ ÀÌ¿ëÇؼ °¢ ³ëµåÀÇ Ãâ·Â°ªÀ» ¿¹»óÇÏ°í, ¿¹»óµÈ Ãâ·Â°ªÀ» ÀÌ¿ëÇØ ¿¬°á°µµ¸¦ ¼öÁ¤ÇÏ´Â ÇнÀ±ÔÄ¢ÀÌ´Ù. Á¦¾ÈµÈ ÇнÀ±ÔÄ¢À» ¿©·¯°¡ÁöÀÇ ÇнÀȯ°æ¿¡¼ ½ÇÇèÇØ º» °á°ú, ÇнÀȯ°æÀÇ º¯È¿¡ Å©°Ô ¿µÇâÀ» ¹ÞÁö ¾ÊÀ¸¸ç º¸´Ù ºü¸£°í ¾ÈÁ¤µÇ°Ô ÇнÀÇÒ ¼ö ÀÖÀ½À» ¾Ë ¼ö ÀÖ¾ú´Ù. |
English Abstract |
In this paper, we propose an improved error back-propagation learning rule that decreases the chance of falling into network paralysis and improves convergence speed. The standard error back-propagation learning rule assumes that the error surface seen by each connection weight is not affected by all the other connection weights that are changing at the same time. In practice, however, when a connection weight is updated, the new weight value changes the error surfaces of other, not yet updated, connection weights, which leads to errors in gradient descent for those weights.
The improved error back-propagation learning rule compensates for this gradient-descent error, caused by the other changing connection weights, by predicting the output value of each node and updating the connection weights using these expected output values. Experimental results show that the proposed learning rule improves convergence speed and is robust under various experimental environments. |
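The abstract describes the method only at a high level, so the exact expected-output update cannot be reconstructed from it. As a hedged illustration, the sketch below implements standard error back-propagation (the baseline the paper improves on) for a small sigmoid network trained on XOR, and marks in a comment the hidden-layer step where the paper's expected-output-value correction would enter. The network size, learning rate, and all names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of standard error back-propagation on a 2-4-1 sigmoid
# network (illustrative baseline; hyperparameters are assumptions, not
# values from the paper).

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_xor(epochs=5000, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output
    for _ in range(epochs):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        y = sigmoid(h @ W2 + b2)
        # Output-layer delta: (output - target) times sigmoid derivative.
        d2 = (y - T) * y * (1 - y)
        # Hidden-layer delta: standard back-propagation uses the *current*
        # hidden outputs h here. The paper's improved rule would instead use
        # expected output values predicted from the output-layer error, to
        # compensate for the error surface shifting as other weights change.
        d1 = (d2 @ W2.T) * h * (1 - h)
        # Gradient-descent weight updates.
        W2 -= lr * (h.T @ d2); b2 -= lr * d2.sum(axis=0)
        W1 -= lr * (X.T @ d1); b1 -= lr * d1.sum(axis=0)
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)

y = train_xor()
print(np.round(y, 3).ravel())  # network outputs for the four XOR inputs
```

With plain batch gradient descent, convergence on XOR depends on the random initialization, which is one symptom of the instability (network paralysis, slow convergence) the paper sets out to reduce.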
Keyword |
|