Practical applications call for efficient model selection criteria for multiclass support vector machine (SVM) classification. To address this problem, this paper develops two model selection criteria by combining or redefining the radius–margin bound used in binary SVMs. The combination is justified by linking the test error rate of a multiclass SVM with that of a set of binary SVMs. The redefinition, which is relatively heuristic, is inspired by the conceptual relationship between the radius–margin bound and the class separability measure. Hence, the two criteria are developed from the perspective of model selection rather than as a generalization of the radius–margin bound for multiclass SVMs. As demonstrated by an extensive experimental study, minimizing these two criteria achieves good model selection on most data sets. Compared with k-fold cross validation, which is often regarded as a benchmark, the two criteria achieve comparable performance with much less computational overhead, particularly when a large number of model parameters must be optimized.
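For intuition, the binary radius–margin bound referred to above is proportional to R²‖w‖², where R is the radius of the smallest ball enclosing the (mapped) training data and 1/‖w‖ is the geometric margin. The sketch below is a hypothetical illustration, not the paper's method: it uses a fixed linear separator `w` rather than a trained SVM, and approximates R by the maximum distance to the data centroid instead of solving the minimum-enclosing-ball problem.

```python
import numpy as np

def radius_margin_bound(X, w):
    """Approximate the radius-margin quantity R^2 * ||w||^2.

    Assumptions (for illustration only): R is approximated by the
    largest distance from the centroid of X, and w is a given linear
    decision direction rather than a trained SVM weight vector.
    """
    center = X.mean(axis=0)                        # crude ball center
    R = np.linalg.norm(X - center, axis=1).max()   # approximate radius
    margin = 1.0 / np.linalg.norm(w)               # geometric margin 1/||w||
    return (R / margin) ** 2                       # = R^2 * ||w||^2

# Toy data: the four corners of the unit square, with w = (1, 1).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = np.array([1.0, 1.0])
print(radius_margin_bound(X, w))  # → 1.0
```

Model selection then amounts to minimizing this quantity over kernel and regularization parameters; the paper's contribution is extending such a criterion from the binary case to the multiclass case.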