Document Type

Journal Article

Publication Details

Shen, C., Kim, J. & Wang, L. (2010). Scalable large-margin Mahalanobis distance metric learning. IEEE Transactions on Neural Networks, 21 (9), 1524-1530.

Abstract

For many machine learning algorithms such as k-nearest neighbor (k-NN) classifiers and k-means clustering, their success heavily depends on the metric used to calculate distances between different data points. An effective solution for defining such a metric is to learn it from a set of labeled training samples. In this work, we propose a fast and scalable algorithm to learn a Mahalanobis distance metric. The Mahalanobis metric can be viewed as the Euclidean distance metric on input data that have been linearly transformed. By employing the principle of margin maximization to achieve better generalization performance, this algorithm formulates metric learning as a convex optimization problem in which a positive semidefinite (p.s.d.) matrix is the unknown variable. Based on an important theorem that a p.s.d. trace-one matrix can always be represented as a convex combination of multiple rank-one matrices, our algorithm accommodates any differentiable loss function and solves the resulting optimization problem using a specialized gradient descent procedure. During the course of optimization, the proposed algorithm maintains the positive semidefiniteness of the matrix variable that is essential for a Mahalanobis metric. Compared with conventional methods such as standard interior-point algorithms [2] or the special solver used in large margin nearest neighbor [24], our algorithm is much more efficient and scales better. Experiments on benchmark data sets suggest that, compared with state-of-the-art metric learning algorithms, our algorithm achieves comparable classification accuracy with reduced computational complexity.
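The abstract rests on two facts that are easy to verify numerically: a Mahalanobis distance with p.s.d. matrix M equals a Euclidean distance after a linear transform (since M = L^T L), and a p.s.d. trace-one matrix can be written as a convex combination of rank-one matrices. The sketch below is an illustrative check of both facts in NumPy, not the authors' implementation; the matrix sizes and the eigendecomposition-based factorization are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Build a p.s.d. trace-one matrix M as a convex combination of
# rank-one matrices u_i u_i^T with unit-norm u_i, mirroring the
# representation theorem the abstract cites.
k = 3
weights = rng.random(k)
weights /= weights.sum()                        # weights on the simplex
U = rng.standard_normal((k, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)   # unit-norm basis vectors
M = sum(w * np.outer(u, u) for w, u in zip(weights, U))

assert np.isclose(np.trace(M), 1.0)             # trace-one by construction
assert np.all(np.linalg.eigvalsh(M) >= -1e-12)  # positive semidefinite

# Factor M = L^T L (here via eigendecomposition) so the Mahalanobis
# distance becomes the Euclidean distance on transformed inputs.
evals, evecs = np.linalg.eigh(M)
L = np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T

x, y = rng.standard_normal(d), rng.standard_normal(d)
diff = x - y
d_mahalanobis = np.sqrt(diff @ M @ diff)        # sqrt((x-y)^T M (x-y))
d_euclidean = np.linalg.norm(L @ diff)          # ||L(x-y)||_2
assert np.isclose(d_mahalanobis, d_euclidean)
```

Parameterizing M this way is what lets the paper's gradient descent procedure keep the matrix variable p.s.d. throughout optimization, rather than projecting back onto the p.s.d. cone after each step.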

RIS ID

54072
