A study of adaboost with SVM based weak learners

RIS ID

54397

Publication Details

Li, X., Wang, L. & Sung, E. (2005). A study of adaboost with SVM based weak learners. International Joint Conference on Neural Networks (IJCNN) (pp. 196-201). Australia: IEEE.

Abstract

In this article, we focus on designing an algorithm, named AdaBoostSVM, that uses SVMs as weak learners for AdaBoost. To obtain a set of effective SVM weak learners, the algorithm adaptively adjusts the kernel parameter of the SVM instead of using a fixed one. Compared with existing AdaBoost methods, AdaBoostSVM offers easier model selection and better generalization performance. It also provides a possible way to handle the over-fitting problem in AdaBoost. An improved version, called Diverse AdaBoostSVM, is further developed to deal with the accuracy/diversity dilemma in boosting methods. By applying parameter-adjusting strategies, the distributions of accuracy and diversity over the SVM weak learners are tuned to achieve a good balance. To the best of our knowledge, a mechanism that conveniently and explicitly balances this dilemma has not previously appeared in the literature. Experimental results demonstrate that both proposed algorithms achieve better generalization performance than AdaBoost with other kinds of weak learners. Benefiting from the balance between accuracy and diversity, Diverse AdaBoostSVM achieves the best performance. In addition, experiments on unbalanced data sets show that AdaBoostSVM performs much better than a single SVM.
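The central idea in the abstract is to run AdaBoost with RBF-kernel SVM weak learners whose kernel width is adapted across boosting rounds rather than fixed. The sketch below illustrates that idea with scikit-learn; the parameter names (sigma_init, sigma_min, sigma_step), the shrink-on-failure schedule, and the stopping rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.svm import SVC

def adaboost_svm(X, y, n_rounds=20, sigma_init=10.0, sigma_min=0.5,
                 sigma_step=0.5, C=1.0):
    """Sketch of AdaBoost with RBF-SVM weak learners whose kernel width
    sigma is decreased when a learner is too weak. Labels y are in {-1, +1}.
    The schedule and thresholds here are assumptions for illustration."""
    n = len(y)
    w = np.full(n, 1.0 / n)            # AdaBoost sample weights
    sigma = sigma_init
    learners, alphas = [], []

    for _ in range(n_rounds):
        if sigma < sigma_min:
            break
        # RBF kernel width sigma mapped to scikit-learn's gamma = 1 / (2 * sigma^2)
        clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma ** 2), C=C)
        clf.fit(X, y, sample_weight=w)
        pred = clf.predict(X)
        err = np.sum(w[pred != y])

        if err > 0.5:
            # No better than chance on the weighted sample: strengthen the
            # weak learner by shrinking sigma, then try again.
            sigma -= sigma_step
            continue

        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)
        learners.append(clf)
        alphas.append(alpha)

        # Reweight: misclassified samples gain weight, correct ones lose it.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()

    return learners, alphas

def ensemble_predict(learners, alphas, X):
    # Weighted vote of the accepted SVM weak learners.
    agg = sum(a * clf.predict(X) for clf, a in zip(learners, alphas))
    return np.sign(agg)
```

The Diverse AdaBoostSVM variant mentioned in the abstract would additionally check how much a candidate weak learner disagrees with the existing ensemble and tune parameters to balance that diversity against accuracy; that check is omitted from this sketch.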

