Journal of the Korea Information Science Society B: Software and Applications

Korean Title: Improving SVM Classification Performance by Constructing Ensembles
English Title: Improving SVM Classification by Constructing Ensemble
Author: Hong-Mo Je, Sung-Yang Bang
Citation: Vol. 30, No. 3, pp. 251-258 (2003.04)
Korean Abstract:
A Support Vector Machine (SVM) shows good generalization performance in theory, but an SVM implemented in practice falls short of that theoretical performance. The main reason is that, because of high time and space complexity, the SVM is implemented with an approximated algorithm. To improve the classification performance of the SVM, this paper proposes constructing an SVM ensemble using Bagging (bootstrap aggregating) and Boosting. In training the SVM ensemble, Bagging draws each individual SVM's training data by random sampling from the whole data set, while Boosting draws the training data according to a probability distribution tied to the errors of the SVM classifier. After the training stage, the outputs of the individual SVMs are aggregated by techniques such as majority voting, least-squares estimation (LSE), and a two-stage hierarchical SVM. The results of several experiments, including IRIS classification, handwritten digit recognition, and face/non-face classification, show that the classification performance of the proposed SVM ensemble surpasses that of a single SVM.
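As a rough illustration of the bagging and majority-voting steps described in the abstract, the sketch below builds a small SVM ensemble on the IRIS data set with scikit-learn. The ensemble size, kernel, and train/test split are assumptions made for illustration; this is not the authors' implementation.

    # Minimal sketch (not the authors' code): bagging SVM ensemble with
    # majority voting on the IRIS data set, using scikit-learn.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    n_members = 10                      # ensemble size (assumed)
    rng = np.random.default_rng(0)
    members = []
    for _ in range(n_members):
        # Bagging: each SVM is trained on a bootstrap sample of the training set.
        idx = rng.integers(0, len(X_tr), size=len(X_tr))
        members.append(SVC(kernel="rbf", C=1.0).fit(X_tr[idx], y_tr[idx]))

    # Majority voting: each member casts one vote per test example.
    votes = np.stack([m.predict(X_te) for m in members])   # shape (n_members, n_test)
    majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    print("ensemble accuracy:", (majority == y_te).mean())

A Boosting variant of the same loop would replace the uniform bootstrap sampling with sampling weights that are increased on examples the already-trained members misclassify.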
English Abstract:
A support vector machine (SVM) is supposed to provide good generalization performance, but the actual performance of an implemented SVM is often far from the theoretically expected level. This is largely because the implementation is based on an approximated algorithm, due to the high complexity of time and space. To overcome this limitation, we propose an ensemble of SVMs constructed by Bagging (bootstrap aggregating) and Boosting. In the Bagging stage, each individual SVM is trained independently on training samples chosen randomly via a bootstrap technique. In the Boosting stage, an individual SVM is trained on training samples chosen according to their probability distribution; the distribution is updated from the errors of the independent classifiers, and the process is iterated. After the training stage, the individual SVMs are aggregated to make a collective decision in several ways, such as majority voting, LSE (least-squares estimation)-based weighting, and double-layer hierarchical combining. Simulation results for IRIS data classification, handwritten digit recognition, and face detection show that the proposed SVM ensembles greatly outperform a single SVM in terms of classification accuracy.
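The LSE-based weighting mentioned in both abstracts can be sketched as an ordinary least-squares fit of combining weights to the members' real-valued outputs on held-out data. The synthetic binary problem, the {-1, +1} label coding, and the validation split below are illustrative assumptions, not the paper's setup.

    # Minimal sketch (not the authors' code): LSE-weighted aggregation of a
    # bagged SVM ensemble on a synthetic binary problem.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=600, n_features=20, random_state=1)
    y = 2 * y - 1                                        # map labels {0,1} -> {-1,+1}
    X_tr, X_hold, y_tr, y_hold = train_test_split(X, y, test_size=0.5, random_state=1)
    X_val, X_te, y_val, y_te = train_test_split(X_hold, y_hold, test_size=0.5, random_state=1)

    rng = np.random.default_rng(1)
    members = []
    for _ in range(5):                                   # 5 bagged SVMs (assumed)
        idx = rng.integers(0, len(X_tr), size=len(X_tr))
        members.append(SVC(kernel="rbf").fit(X_tr[idx], y_tr[idx]))

    # Each column of F holds one member's real-valued decision output.
    F_val = np.column_stack([m.decision_function(X_val) for m in members])
    w, *_ = np.linalg.lstsq(F_val, y_val, rcond=None)    # least-squares combining weights

    F_te = np.column_stack([m.decision_function(X_te) for m in members])
    pred = np.sign(F_te @ w)                             # weighted sum of outputs, then sign
    print("LSE-weighted ensemble accuracy:", (pred == y_te).mean())

Fitting the weights on a validation split rather than the training data is one reasonable choice; the paper's double-layer hierarchical combining instead feeds the member outputs into a second-stage SVM.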
Keywords: SVM; face detection; IRIS data classification