
Bagging and boosting variants for handling classification problems: a survey

Published online by Cambridge University Press:  23 August 2013

Sotiris B. Kotsiantis*
Affiliation:
Department of Mathematics, Educational Software Development Laboratory, University of Patras, Patras 26504, Greece; e-mail: sotos@math.upatras.gr

Abstract

Bagging and boosting are two of the best-known ensemble learning methods, thanks to their theoretical performance guarantees and strong experimental results. Because both are effective, open frameworks, several researchers have proposed variants of them, some of which achieve lower classification error than the original versions. This paper summarizes these variants and categorizes them into groups. We hope that the references cited cover the major theoretical issues, provide access to the main branches of the literature dealing with such methods, and guide the researcher toward interesting research directions.
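As a concrete point of reference for the variants surveyed, the sketch below contrasts the two base methods on a synthetic task. It is illustrative only and not taken from the paper; it assumes scikit-learn's stock BaggingClassifier and AdaBoostClassifier, with arbitrary dataset and ensemble sizes.

```python
# Minimal bagging-vs-boosting comparison (illustrative sketch).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic binary classification task.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Bagging: each base tree is fitted to an independent bootstrap
# resample of the training set and the ensemble votes, which mainly
# reduces the variance of the unstable base learner.
bagging = BaggingClassifier(n_estimators=50, random_state=0)

# Boosting (AdaBoost): base learners are fitted sequentially, each one
# concentrating on the examples its predecessors misclassified, which
# reduces bias as well as variance.
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

for name, clf in (("bagging", bagging), ("boosting", boosting)):
    score = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name}: mean 10-fold CV accuracy = {score:.3f}")
```

Broadly, the variants surveyed below modify one of these ingredients: the resampling scheme, the example weighting, the base learner, or the combination rule.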

Type
Articles
Copyright
Copyright © Cambridge University Press 2013 


References

Aksela, M., Laaksonen, J. 2006. Using diversity of errors for selecting members of a committee classifier. Pattern Recognition 39(4), 608–623.
Alaiz-Rodriguez, R. 2008. Local decision bagging of binary neural classifiers. Lecture Notes in Artificial Intelligence 5032, 1–12.
Amores, J., Sebe, N., Radeva, P. 2006. Boosting the distance estimation: application to the K-nearest neighbor classifier. Pattern Recognition Letters 27, 201–209.
Babenko, B., Yang, M. H., Belongie, S. 2009. A family of online boosting algorithms. In IEEE 12th International Conference on Computer Vision Workshops, September 27–October 4, Kyoto, 1346–1353.
Bakker, B., Heskes, T. 2003. Clustering ensembles of neural network models. Neural Networks 16(2), 261–269.
Banfield, R. E., Hall, L. O., Bowyer, K. W. 2007. A comparison of decision tree ensemble creation techniques. IEEE Transactions on Pattern Analysis and Machine Intelligence 29, 173–180.
Banfield, R. E., Hall, L. O., Bowyer, K. W., Kegelmeyer, W. P. 2005. Ensemble diversity measures and their application to thinning. Information Fusion 6(1), 49–62.
Błaszczyński, J., Słowiński, R., Stefanowski, J. 2010. Variable consistency bagging ensembles. Lecture Notes in Computer Science 5946, 40–52.
Bauer, E., Kohavi, R. 1999. An empirical comparison of voting classification algorithms: bagging, boosting, and variants. Machine Learning 36, 105–139.
Bifet, A., Holmes, G., Pfahringer, B. 2010a. Leveraging bagging for evolving data streams. Lecture Notes in Artificial Intelligence 6321, 135–150.
Bifet, A., Holmes, G., Kirkby, R., Pfahringer, B. 2010b. MOA: massive online analysis. Journal of Machine Learning Research 11, 1601–1604.
Bifet, A., Holmes, G., Pfahringer, B., Kirkby, R., Gavalda, R. 2009a. New ensemble methods for evolving data streams. In 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, 139–148.
Bifet, A., Holmes, G., Pfahringer, B., Gavalda, R. 2009b. Improving adaptive bagging methods for evolving data streams. Lecture Notes in Artificial Intelligence 5828, 23–37.
Bradley, J. K., Schapire, R. E. 2008. FilterBoost: regression and classification on large datasets. Advances in Neural Information Processing Systems 20, 185–192.
Breiman, L. 1996. Bagging predictors. Machine Learning 24, 123–140.
Breiman, L. 1999a. Pasting small votes for classification in large databases and on-line. Machine Learning 36(1–2), 85–103.
Breiman, L. 1999b. Prediction games and arcing algorithms. Neural Computation 11(7), 1493–1517.
Breiman, L. 2000. Randomizing outputs to increase prediction accuracy. Machine Learning 40, 229–242.
Breiman, L. 2001. Random forests. Machine Learning 45(1), 5–32.
Brown, G., Wyatt, J., Harris, R., Yao, X. 2005. Diversity creation methods: a survey and categorisation. Information Fusion 6(1), 5–20.
Bryll, R., Gutierrez-Osuna, R., Quek, F. 2003. Attribute bagging: improving accuracy of classifier ensembles by using random feature subsets. Pattern Recognition 36, 1291–1302.
Brzezinski, D., Stefanowski, J. 2013. Reacting to different types of concept drift: the accuracy updated ensemble algorithm. IEEE Transactions on Neural Networks and Learning Systems 24(7), in press.
Buhlmann, P. 2012. Bagging, boosting and ensemble methods. In Handbook of Computational Statistics, Springer, 985–1022.
Buhlmann, P., Yu, B. 2002. Analyzing bagging. The Annals of Statistics 30(4), 927–961.
Buja, A., Stuetzle, W. 2006. Observations on bagging. Statistica Sinica 16, 323–351.
Cai, Q.-T., Chun-Yi, P., Chang-Shui, Z. 2008a. A weighted subspace approach for improving bagging performance. In IEEE International Conference on Acoustics, Speech and Signal Processing, March 31–April 4, Las Vegas, 3341–3344.
Cai, Q.-T., Chun-Yi, P., Chang-Shui, Z. 2008b. Cost-sensitive boosting algorithms as gradient descent. In IEEE International Conference on Acoustics, Speech and Signal Processing, March 31–April 4, Las Vegas, 2009–2012.
Chawla, N. V., Hall, L. O., Bowyer, K. W., Kegelmeyer, W. P. 2002. SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research 16, 321–357.
Chawla, N. V., Lazarevic, A., Hall, L. O., Bowyer, K. 2003. SMOTEBoost: improving prediction of the minority class in boosting. In Principles of Knowledge Discovery in Databases, PKDD-2003, 107–119.
Coelho, A., Nascimento, D. 2010. On the evolutionary design of heterogeneous bagging models. Neurocomputing 73(16–18), 3319–3322.
Croux, C., Joossens, K., Lemmens, A. 2007. Trimmed bagging. Computational Statistics and Data Analysis 52, 362–368.
Derbeko, P., El-Yaniv, R., Meir, R. 2002. Variance optimized bagging. Lecture Notes in Artificial Intelligence 2430, 60–72.
Dietterich, T. 2000. An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting, and randomization. Machine Learning 40, 139–157.
Domingos, P. 2000. A unified bias-variance decomposition for zero-one and squared loss. In Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, 564–569.
Elwell, R., Polikar, R. 2011. Incremental learning of concept drift in nonstationary environments. IEEE Transactions on Neural Networks 22(10), 1517–1531.
Esposito, R., Saitta, L. 2003. Monte Carlo theory as an explanation of bagging and boosting. In 18th International Joint Conference on Artificial Intelligence, IJCAI'03, Acapulco, Mexico, 499–504.
Faisal, Z., Uddin, M. M., Hirose, H. 2010. On selecting additional predictive models in double bagging type ensemble method. Lecture Notes in Computer Science 6019, 199–208.
Fan, W., Stolfo, S. J., Zhang, J., Chan, P. K. 1999. AdaCost: misclassification cost-sensitive boosting. In 16th International Conference on Machine Learning, Slovenia, 97–105.
Frank, E., Pfahringer, B. 2006. Improving on bagging with input smearing. Lecture Notes in Artificial Intelligence 3918, 97–106.
Freund, Y., Schapire, R. E. 1996a. Experiments with a new boosting algorithm. In 13th International Conference on Machine Learning, Bari, Italy, 148–156.
Freund, Y., Schapire, R. E. 1996b. Game theory, on-line prediction and boosting. In Ninth Annual Conference on Computational Learning Theory, COLT'96, Desenzano sul Garda, Italy, 325–332.
Freund, Y., Schapire, R. E. 1997. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1), 119–139.
Friedman, J. H., Hall, P. 2007. On bagging and nonlinear estimation. Journal of Statistical Planning and Inference 137(3), 669–683.
Friedman, J. H., Hastie, T., Tibshirani, R. 2000. Additive logistic regression: a statistical view of boosting. Annals of Statistics 28(2), 337–407.
Fu, Q., Hu, S. X., Zhao, S. Y. 2005. Clustering-based selective neural network ensemble. Journal of Zhejiang University Science 6A(5), 387–392.
Fumera, G., Roli, F., Serrau, A. 2005. Dynamics of variance reduction in bagging and other techniques based on randomisation. Lecture Notes in Computer Science 3541, 316–325.
Fumera, G., Roli, F., Serrau, A. 2008. A theoretical analysis of bagging as a linear combination of classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence 30(7), 1293–1299.
Fushiki, T. 2010. Bayesian bootstrap prediction. Journal of Statistical Planning and Inference 140, 65–74.
Galar, M., Fernandez, A., Barrenechea, E., Bustince, H., Herrera, F. 2012. A review on ensembles for the class imbalance problem: bagging-, boosting-, and hybrid-based approaches. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 42(4), 463–484.
Gama, J. 2010. Knowledge Discovery from Data Streams. CRC Press.
Gambin, A., Szczurek, E., Dutkowski, J., Bakun, M., Dadlez, M. 2009. Classification of peptide mass fingerprint data by novel no-regret boosting method. Computers in Biology and Medicine 39, 460–473.
Gao, W., Zhou, Z.-H. 2010. Approximation stability and boosting. Lecture Notes in Artificial Intelligence 6331, 59–73.
Gao, Y., Gao, F., Guan, X. 2010. Improved boosting algorithm with adaptive filtration. In 8th World Congress on Intelligent Control and Automation, July 6–9, Jinan, China, 3173–3178.
Garcia-Pedrajas, N. 2009. Supervised projection approach for boosting classifiers. Pattern Recognition 42, 1742–1760.
Garcia-Pedrajas, N., Ortiz-Boyer, D. 2008. Boosting random subspace method. Neural Networks 21, 1344–1362.
Garcia-Pedrajas, N., Ortiz-Boyer, D. 2009. Boosting k-nearest neighbor classifier by means of input space projection. Expert Systems with Applications 36, 10570–10582.
Gomez-Verdejo, V., Ortega-Moral, M., Arenas-Garcia, J., Figueiras-Vidal, A. 2006. Boosting by weighting critical and erroneous samples. Neurocomputing 69, 679–685.
Grabner, H., Bischof, H. 2006. On-line boosting and vision. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 17–22, 260–267.
Grandvalet, Y. 2004. Bagging equalizes influence. Machine Learning 55, 251–270.
Hall, L., Banfield, R., Bowyer, K., Kegelmeyer, W. 2007. Boosting lite: handling larger datasets and slower base classifiers. Lecture Notes in Computer Science 4472, 161–170.
Hall, P., Samworth, R. J. 2005. Properties of bagged nearest neighbour classifiers. Journal of the Royal Statistical Society, Series B 67(3), 363–379.
Hernandez-Lobato, D., Martinez-Munoz, G., Suarez, A. 2007. Out of bootstrap estimation of generalization error curves in bagging ensembles. Lecture Notes in Computer Science 4881, 47–56.
Hido, S., Kashima, H., Takahashi, Y. 2009. Roughly balanced bagging for imbalanced data. Statistical Analysis and Data Mining 2, 412–426.
Ho, T. K. 1998. The random subspace method for constructing decision forests. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(8), 832–844.
Hothorn, T., Lausen, B. 2003. Double-bagging: combining classifiers by bootstrap aggregation. Pattern Recognition 36(6), 1303–1309.
Hothorn, T., Lausen, B. 2005. Bundling classifiers by bagging trees. Computational Statistics and Data Analysis 49, 1068–1078.
Jiang, Y., Jin-Jiang, L., Gang, L., Honghua, D., Zhi-Hua, Z. 2005. Dependency bagging. Lecture Notes in Artificial Intelligence 3641, 491–500.
Jiménez-Gamero, M. D., Muñoz-García, J., Pino-Mejías, R. 2004. Reduced bootstrap for the median. Statistica Sinica 14, 1179–1198.
Joshi, M. V., Kumar, V., Agarwal, R. C. 2001. Evaluating boosting algorithms to classify rare classes: comparison and improvements. In IEEE International Conference on Data Mining, 257–264.
Kalai, A., Servedio, R. 2005. Boosting in the presence of noise. Journal of Computer and System Sciences 71, 266–290.
Khoshgoftaar, T. M., Van Hulse, J., Napolitano, A. 2011. Comparing boosting and bagging techniques with noisy and imbalanced data. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans 41(3), 552–568.
Koco, S., Capponi, C. 2011. A boosting approach to multiview classification with cooperation. In European Conference on Machine Learning and Knowledge Discovery in Databases, ECML-PKDD'11, Athens, 209–228.
Kotsiantis, S., Pintelas, P. 2004. Combining bagging and boosting. International Journal of Computational Intelligence 1(4), 324–333.
Kotsiantis, S. B., Kanellopoulos, D. 2010. Bagging different instead of similar models for regression and classification problems. International Journal of Computer Applications in Technology 37(1), 20–28.
Kotsiantis, S. B., Kanellopoulos, D., Pintelas, P. E. 2006. Local boosting of decision stumps for regression and classification problems. Journal of Computers 1(4), 30–37.
Kotsiantis, S. B., Tsekouras, G. E., Pintelas, P. E. 2005. Local bagging of decision stumps. In 18th International Conference on Innovations in Applied Artificial Intelligence, Bari, Italy, 406–411.
Kuncheva, L. I., Skurichina, M., Duin, R. P. W. 2002. An experimental study on diversity for bagging and boosting with linear classifiers. Information Fusion 3, 245–258.
Kuncheva, L., Whitaker, C. J. 2002. Using diversity with three variants of boosting: aggressive, conservative, and inverse. Lecture Notes in Computer Science 2364, 81–90.
Kuncheva, L. I., Whitaker, C. J. 2003. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine Learning 51, 181–207.
Kuncheva, L. 2004a. Combining Pattern Classifiers: Methods and Algorithms. Wiley.
Kuncheva, L. 2004b. Classifier ensembles for changing environments. Lecture Notes in Computer Science 3077, 1–15.
Latinne, P., Debeir, O., Decaestecker, Ch. 2000. Mixing bagging and multiple feature subsets to improve classification accuracy of decision tree combination. In 10th Belgian-Dutch Conference on Machine Learning, Tilburg University, 15–22.
Le, D.-D., Satoh, S. 2007. Ent-Boost: boosting using entropy measures for robust object detection. Pattern Recognition Letters 28, 1083–1090.
Lee, H., Clyde, M. A. 2004. Lossless online Bayesian bagging. Journal of Machine Learning Research 5, 143–151.
Leistner, C., Saffari, A., Roth, P., Bischof, H. 2009. On robustness of on-line boosting: a competitive study. In IEEE 12th International Conference on Computer Vision Workshops, Kyoto, Japan, 1362–1369.
Leskes, B., Torenvliet, L. 2008. The value of agreement, a new boosting algorithm. Journal of Computer and System Sciences 74, 557–586.
Li, C. 2007. Classifying imbalanced data using a bagging ensemble variation (BEV). In 45th Annual Southeast Regional Conference, 203–208.
Li, G.-Z., Yang, J. Y. 2008. Feature selection for ensemble learning and its application. In Machine Learning in Bioinformatics, Zhang, Y.-Q. & Rajapakse, J. C. (eds). Wiley.
Li, W., Gao, X., Zhu, Y., Ramesh, V., Boult, T. 2005. On the small sample performance of boosted classifiers. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), June 20–25, 574–581.
Littlestone, N., Warmuth, M. K. 1994. The weighted majority algorithm. Information and Computation 108(2), 212–261.
Liu, X., Yu, T. 2007. Gradient feature selection for online boosting. In IEEE 11th International Conference on Computer Vision (ICCV 2007), Rio de Janeiro, Brazil, 1–8.
Lu, Y., Tian, Q., Huang, T. 2007. Interactive boosting for image classification. Lecture Notes in Computer Science 4472, 180–189.
Markowitz, H. 1959. Portfolio Selection: Efficient Diversification of Investments. Yale University Press.
Martinez-Munoz, G., Suarez, A. 2007. Using boosting to prune bagging ensembles. Pattern Recognition Letters 28(1), 156–165.
Martinez-Munoz, G., Suarez, A. 2010. Out-of-bag estimation of the optimal sample size in bagging. Pattern Recognition 43, 143–152.
Martinez-Munoz, G., Hernandez-Lobato, D., Suarez, A. 2007. Selection of decision stumps in bagging ensembles. Lecture Notes in Computer Science 4668, 319–328.
Mason, L., Baxter, J., Bartlett, P., Frean, M. 1999. Functional gradient techniques for combining hypotheses. In Advances in Large Margin Classifiers, MIT Press, 221–247.
Melville, P., Mooney, R. 2005. Creating diversity in ensembles using artificial data. Information Fusion 6, 99–111.
Minku, L. L., Yao, X. 2012. DDD: a new ensemble approach for dealing with concept drift. IEEE Transactions on Knowledge and Data Engineering 24(4), 619–633.
Nanculef, R., Valle, C., Allende, H., Moraga, C. 2007. Bagging with asymmetric costs for misclassified and correctly classified examples. Lecture Notes in Computer Science 4756, 694–703.
Oza, N. C. 2003. Boosting with averaged weight vectors. Lecture Notes in Computer Science 2709, 15–24.
Oza, N. 2005. Online bagging and boosting. In 2005 IEEE International Conference on Systems, Man and Cybernetics, October 10–12, Hawaii, USA, 2340–2345.
O'Sullivan, J., Langford, J., Caruana, R., Blum, A. 2000. FeatureBoost: a meta-learning algorithm that improves model robustness. In 17th International Conference on Machine Learning, 703–710.
Panov, P., Dzeroski, S. 2007. Combining bagging and random subspaces to create better ensembles. Lecture Notes in Computer Science 4723, 118–129.
Pelossof, R., Jones, M., Vovsha, I., Rudin, C. 2009. Online coordinate boosting. In IEEE 12th International Conference on Computer Vision Workshops, Kyoto, Japan, 1354–1361.
Peng, J., Barbu, C., Seetharaman, G., Fan, W., Wu, X., Palaniappan, K. 2011. ShareBoost: boosting for multi-view learning with performance guarantees. Lecture Notes in Computer Science 6912, 597–612.
Pham, T., Smeulders, A. 2008. Quadratic boosting. Pattern Recognition 41, 331–341.
Pino-Mejías, R., Jiménez-Gamero, M., Cubiles-de-la-Vega, M., Pascual-Acosta, A. 2008. Reduced bootstrap aggregating of learning algorithms. Pattern Recognition Letters 29, 265–271.
Pino-Mejías, R., Cubiles-de-la-Vega, M., López-Coello, M., Silva-Ramírez, E., Jiménez-Gamero, M. 2004. Bagging classification models with reduced bootstrap. Lecture Notes in Computer Science 3138, 966–973.
Puuronen, S., Skrypnyk, I., Tsymbal, A. 2001. Ensemble feature selection based on contextual merit and correlation heuristics. Lecture Notes in Computer Science 2151, 155–168.
Redpath, D. B., Lebart, K. 2005. Boosting feature selection. Lecture Notes in Computer Science 3686, 305–314.
Reyzin, L., Schapire, R. E. 2006. How boosting the margin can also boost classifier complexity. In 23rd International Conference on Machine Learning, Pittsburgh, 753–760.
Rodriguez, J. J., Kuncheva, L. I., Alonso, C. J. 2006. Rotation forest: a new classifier ensemble method. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(10), 1619–1630.
Rodriguez, J. J., Maudes, J. 2008. Boosting recombined weak classifiers. Pattern Recognition Letters 29, 1049–1059.
Schapire, R., Freund, Y., Bartlett, P., Lee, W. S. 1997. Boosting the margin: a new explanation for the effectiveness of voting methods. In Fourteenth International Conference on Machine Learning, ICML'97, 322–330.
Schapire, R. E., Singer, Y. 1999. Improved boosting algorithms using confidence-rated predictions. Machine Learning 37, 297–336.
Seiffert, C., Khoshgoftaar, T., Hulse, J., Napolitano, A. 2008. Resampling or reweighting: a comparison of boosting implementations. In 20th IEEE International Conference on Tools with Artificial Intelligence, ICTAI'08, Ohio, USA, 445–451.
Seni, G., Elder, J. 2010. Ensemble methods in data mining: improving accuracy through combining predictions. Synthesis Lectures on Data Mining and Knowledge Discovery 2(1), 1–126.
Servedio, R. A. 2003. Smooth boosting and learning with malicious noise. Journal of Machine Learning Research 4, 633–648.
Shen, C., Li, H. 2010. Boosting through optimization of margin distributions. IEEE Transactions on Neural Networks 21(4), 659–667.
Shirai, S., Kudo, M., Nakamura, A. 2008. Bagging, random subspace method and biding. Lecture Notes in Computer Science 5342, 801–810.
Shirai, S., Kudo, M., Nakamura, A. 2009. Comparison of bagging and boosting algorithms on sample and feature weighting. Lecture Notes in Computer Science 5519, 22–31.
Skurichina, M., Duin, R. 2000. The role of combining rules in bagging and boosting. Lecture Notes in Computer Science 1876, 631–640.
Sohn, S. Y., Shin, H. W. 2007. Experimental study for the comparison of classifier combination methods. Pattern Recognition 40, 33–40.
Stefanowski, J. 2007. Combining answers of sub-classifiers in the bagging-feature ensembles. Lecture Notes in Artificial Intelligence 4585, 574–583.
Su, X., Khoshgoftaar, T. M., Zhu, X. 2008. VoB predictors: voting on bagging classifications. In 19th International Conference on Pattern Recognition, ICPR 2008, December 8–11, Florida, USA, 1–4.
Sun, Y., Kamel, M. S., Wong, A., Wang, Y. 2007. Cost-sensitive boosting for classification of imbalanced data. Pattern Recognition 40, 3358–3378.
Tang, W. 2003. Selective ensemble of decision trees. Lecture Notes in Artificial Intelligence 2639, 476–483.
Tao, D., Tang, X., Li, X., Wu, X. 2006. Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(7), 1088–1099.
Terabe, M., Washio, T., Motoda, H. 2001. The effect of subsampling rate on subagging performance. In ECML 2001/PKDD 2001, 48–55.
Ting, K., Witten, I. 1997. Stacking bagged and dagged models. In Fourteenth International Conference on Machine Learning, ICML'97, Tennessee, USA, 367–375.
Torres-Sospedra, J., Hernandez-Espinosa, C., Fernandez-Redondo, M. 2007a. Mixing Aveboost and Conserboost to improve boosting methods. In International Joint Conference on Neural Networks, Orlando, Florida, USA, August 12–17, 672–677.
Torres-Sospedra, J., Hernandez-Espinosa, C., Fernandez-Redondo, M. 2007b. Designing a multilayer feedforward ensemble with the weighted conservative boosting algorithm. In International Joint Conference on Neural Networks, Orlando, Florida, USA, August 12–17, 684–689.
Torres-Sospedra, J., Hernandez-Espinosa, C., Fernandez-Redondo, M. 2008. Researching on combining boosting ensembles. In International Joint Conference on Neural Networks, IJCNN 2008, Hong Kong, 2290–2295.
Tsao, C., Chang, Y. I. 2007. A stochastic approximation view of boosting. Computational Statistics and Data Analysis 52, 325–334.
Tsymbal, A., Puuronen, S. 2000. Bagging and boosting with dynamic integration of classifiers. Lecture Notes in Artificial Intelligence 1910, 116–125.
Valentini, G., Masulli, F. 2002. Ensembles of learning machines. Lecture Notes in Computer Science 2486, 3–19.
Valentini, G., Dietterich, T. G. 2003. Low bias bagged support vector machines. In 20th International Conference on Machine Learning, ICML-2003, Washington, USA, 752–759.
Vezhnevets, A., Barinova, O. 2007. Avoiding boosting overfitting by removing confusing samples. In ECML 2007, Poland, September, 430–441.
Wall, R., Cunningham, P., Walsh, P., Byrne, S. 2003. Explaining the output of ensembles in medical decision support on a case by case basis. Artificial Intelligence in Medicine 28(2), 191–206.
Wang, X., Wang, H. 2006. Classification by evolutionary ensembles. Pattern Recognition 39, 595–607.
Wang, C.-M., Yang, H.-Z., Li, F.-C., Fu, R.-X. 2006. Two stages based adaptive sampling boosting method. In Fifth International Conference on Machine Learning and Cybernetics, Dalian, August 13–16, 2925–2927.
Wang, S., Yao, X. 2009. Diversity analysis on imbalanced data sets by using ensemble models. In IEEE Symposium on Computational Intelligence and Data Mining, 324–331.
Wang, W., Zhou, Z.-H. 2010. A new analysis of co-training. In 27th International Conference on Machine Learning, ICML'10, Haifa, Israel, 1135–1142.
Webb, G. I. 2000. MultiBoosting: a technique for combining boosting and wagging. Machine Learning 40, 159–196.
Wolpert, D. H. 1992. Stacked generalization. Neural Networks 5(2), 241–259.
Xu, W., Meiyun, Z., Mingtao, Z., He, R. 2010. Constraint bagging for stock price prediction using neural networks. In International Conference on Modelling, Identification and Control, Okayama, Japan, July 17–19, 606–610.
Xu, X., Zhang, A. 2006. Boost feature subset selection: a new gene selection algorithm for microarray data set. In International Conference on Computational Science, UK, 670–677.
Yang, L., Gong, W., Gu, X., Li, W., Liu, Y. 2009. Bagging null space locality preserving discriminant classifiers for face recognition. Pattern Recognition 42, 1853–1858.
Yasumura, Y., Kitani, N., Uehara, K. 2005. Integration of bagging and boosting with a new reweighting technique. In International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'05), Vienna, Austria, 338–343.
Yi, X.-C., Ha, Z., Liu, C.-P. 2004. Selective bagging based incremental learning. In Third International Conference on Machine Learning and Cybernetics, Shanghai, August 26–29, 2412–2417.
Yin, H., Dong, H. 2011. The problem of noise in classification: past, current and future work. In 2011 IEEE 3rd International Conference on Communication Software and Networks (ICCSN), May 27–29, 412–416.
Yin, X.-C., Liu, C.-P., Zhi, H. 2005. Feature combination using boosting. Pattern Recognition Letters 26, 2195–2205.
Zaman, F., Hirose, H. 2008. A robust bagging method using median as a combination rule. In IEEE 8th International Conference on Computer and Information Technology Workshops, Dhaka, Bangladesh, 55–60.
Zhang, C. X., Zhang, J. S. 2008a. A local boosting algorithm for solving classification problems. Computational Statistics and Data Analysis 52(4), 1928–1941.
Zhang, C. X., Zhang, J. S. 2008b. RotBoost: a technique for combining Rotation Forest and AdaBoost. Pattern Recognition Letters 29, 1524–1536.
Zhang, C. X., Zhang, J. S., Zhang, G.-Y. 2008. An efficient modified boosting method for solving classification problems. Journal of Computational and Applied Mathematics 214, 381–392.
Zhang, C. X., Zhang, J. S., Zhang, G.-Y. 2009. Using boosting to prune double-bagging ensembles. Computational Statistics and Data Analysis 53, 1218–1231.
Zhang, D., Zhou, X., Leung, S., Zheng, J. 2010. Vertical bagging decision trees model for credit scoring. Expert Systems with Applications 37, 7838–7843.
Zhou, Z.-H., Wu, J., Tang, W. 2002. Ensembling neural networks: many could be better than all. Artificial Intelligence 137(1–2), 239–263.
Zhou, Z. H., Yu, Y. 2005a. Adapt bagging to nearest neighbor classifiers. Journal of Computer Science and Technology 20(1), 48–54.
Zhou, Z. H., Yu, Y. 2005b. Ensembling local learners through multimodal perturbation. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 35(4), 725–735.
Zhu, X., Yang, Y. 2008. A lazy bagging approach to classification. Pattern Recognition 41, 2980–2992.