Appendix
Published online by Cambridge University Press: 05 June 2012
Summary
Software used in this book
Classification and Regression Tree; CART
We used the Matlab functions treefit and treeval for learning and prediction, respectively, with Gini's diversity index as the splitting criterion. See also Note 1(c) at the end of Chapter 7.
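To illustrate the splitting criterion, here is a minimal pure-Python sketch (not the book's Matlab treefit) of Gini's diversity index and an exhaustive search for the best split threshold on a single feature; function names are ours, for illustration only.

```python
def gini(labels):
    """Gini diversity index: 1 minus the sum of squared class proportions."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(x, y):
    """Pick the threshold on one feature that minimizes the
    sample-weighted Gini impurity of the two child nodes."""
    pairs = sorted(zip(x, y))
    n = len(pairs)
    best = (None, float("inf"))
    for i in range(1, n):
        if pairs[i][0] == pairs[i - 1][0]:
            continue  # no threshold separates equal feature values
        thr = (pairs[i][0] + pairs[i - 1][0]) / 2
        left = [p[1] for p in pairs[:i]]
        right = [p[1] for p in pairs[i:]]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best[1]:
            best = (thr, score)
    return best
```

CART grows a tree by applying such a search recursively over all features at each node; a pure node (one class) has Gini index 0.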
k-Nearest Neighbor; k-NN
k-NN algorithms are relatively simple to implement, but the best implementations are heavily optimized for speed. We used several implementations; two that are available at Matlab Central are: an implementation by Yi Cao (Cranfield University, posted 25 March 2008) called Efficient K-Nearest Neighbor Search using JIT, http://www.mathworks.com/matlabcentral/fileexchange/19345-efficient-k-nearest-neighbor-search-using-jit, and an implementation by Luigi Giaccari called Fast k-Nearest Neighbors Search, http://www.mathworks.es/matlabcentral/fileexchange/22190.
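The underlying idea is easy to show in a few lines. The following is a brute-force pure-Python sketch of k-NN classification (the Matlab Central implementations cited above are far faster; this only illustrates the method, and the function name is ours):

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points under Euclidean distance."""
    dists = sorted(
        (math.dist(x, query), y) for x, y in zip(train_X, train_y)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]
```

Brute-force search costs O(n) distance computations per query; the fast implementations above avoid this with data structures and JIT acceleration.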
Support Vector Machines; SVM
We used the SVMlight implementation, which can be found at http://svmlight.joachims.org/.
A number of other software packages for SVMs can be found at http://www.support-vector-machines.org/SVM_soft.html.
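SVMlight itself is a standalone C program, but the linear SVM objective it optimizes can be sketched compactly. Below is a Pegasos-style stochastic subgradient sketch in pure Python, minimizing the regularized hinge loss; the hyperparameters and function names are illustrative, not SVMlight's defaults or API.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimize lam/2 * ||w||^2 + mean(max(0, 1 - y * <w, x>)),
    with labels y in {-1, +1}, by stochastic subgradient descent."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)  # Pegasos step-size schedule
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            # Shrink w (regularizer), then push toward a violated example.
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    """Sign of the linear score <w, x>."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```

SVMlight additionally handles kernels, a bias term, and large sparse data sets, which this sketch omits.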
Statistical Learning for Biomedical Data, pp. 263-270. Publisher: Cambridge University Press. Print publication year: 2011.