A distance metric that can accurately reflect the intrinsic characteristics of data is critical for visual recognition tasks. An effective solution to defining such a metric is to learn it from a set of training samples. In this work, we propose a fast and scalable algorithm to learn a Mahalanobis distance. By employing the principle of margin maximization to secure better generalization performance, this algorithm formulates metric learning as a convex optimization problem with a positive semidefinite (psd) matrix variable. Based on an important theorem, namely that a psd matrix with trace one can always be represented as a convex combination of multiple rank-one matrices, our algorithm employs a differentiable loss function and solves the above convex optimization with gradient descent methods. This algorithm not only naturally maintains the psd requirement of the matrix variable that is essential for metric learning, but also significantly cuts down computational overhead, making it much more efficient as the dimension of the feature vectors increases. An experimental study on benchmark data sets indicates that, compared with existing metric learning algorithms, our algorithm can achieve higher classification accuracy with much less computational load.
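The decomposition cited above can be illustrated with a minimal NumPy sketch; this is a hypothetical example, not the paper's implementation. It builds a trace-one psd matrix M as a convex combination of rank-one matrices u_i u_i^T (unit vectors u_i and weights w are arbitrary choices for illustration) and uses M as a Mahalanobis metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unit vectors u_i give rank-one psd matrices u_i u_i^T, each with trace one.
us = rng.standard_normal((3, 5))
us /= np.linalg.norm(us, axis=1, keepdims=True)

# Convex combination weights: non-negative and summing to one.
w = np.array([0.5, 0.3, 0.2])

# M = sum_i w_i u_i u_i^T is psd with trace exactly one.
M = sum(wi * np.outer(ui, ui) for wi, ui in zip(w, us))

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

x, y = rng.standard_normal(5), rng.standard_normal(5)
print(np.isclose(np.trace(M), 1.0))              # trace is one
print(bool(np.all(np.linalg.eigvalsh(M) >= -1e-12)))  # psd
print(mahalanobis_sq(x, y, M) >= 0.0)            # non-negative distance
```

Because each rank-one term is psd and the weights are non-negative, the combination is psd by construction, which is why a parameterization of this form maintains the psd constraint without explicit projection.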