MutexMatch: Semi-Supervised Learning With Mutex-Based Consistency Regularization

Publication Name

IEEE Transactions on Neural Networks and Learning Systems

Abstract

The core issue in semi-supervised learning (SSL) is how to effectively leverage unlabeled data, yet most existing methods place great emphasis on high-confidence samples while seldom fully exploring low-confidence ones. In this article, we utilize low-confidence samples in a novel way with our proposed mutex-based consistency regularization, namely MutexMatch. Specifically, high-confidence samples are required to predict exactly "what it is" by the conventional true-positive classifier (TPC), while low-confidence samples are employed to achieve a simpler goal: to easily predict "what it is not" by the true-negative classifier (TNC). In this sense, we not only mitigate pseudo-labeling errors but also make full use of the low-confidence unlabeled data through the consistency of dissimilarity degree. MutexMatch achieves superior performance on multiple benchmark datasets, i.e., Canadian Institute for Advanced Research (CIFAR)-10, CIFAR-100, street view house numbers (SVHN), self-taught learning 10 (STL-10), and mini-ImageNet. More importantly, our method shows further superiority when labeled data are scarce, e.g., 92.23% accuracy with only 20 labeled samples on CIFAR-10. Code has been released at https://github.com/NJUyued/MutexMatch4SSL.
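
To make the abstract's TPC/TNC split concrete, below is a minimal PyTorch-style sketch of how an unlabeled batch might be routed to the two heads. This is not the authors' released implementation (see the repository linked above); the names tpc_head and tnc_head, the confidence threshold tau, and the bottom-k negative-target construction are illustrative assumptions standing in for the paper's TPC and TNC.

```python
import torch
import torch.nn.functional as F

def mutex_style_loss(feats_weak, feats_strong, tpc_head, tnc_head,
                     tau=0.95, k=5):
    """Illustrative unlabeled-data loss in the spirit of MutexMatch.

    feats_weak / feats_strong: backbone features of weakly / strongly
    augmented views of the same unlabeled batch. tpc_head / tnc_head:
    linear heads standing in for the true-positive and true-negative
    classifiers. All names and the bottom-k scheme are assumptions.
    """
    with torch.no_grad():
        probs_w = F.softmax(tpc_head(feats_weak), dim=1)
        conf, pseudo = probs_w.max(dim=1)
        high = conf >= tau   # confident samples: learn "what it is"
        low = ~high          # the rest: learn "what it is not"

    loss = feats_weak.new_zeros(())

    # High-confidence branch: conventional pseudo-labeling through the TPC,
    # applied to the strongly augmented view.
    if high.any():
        loss = loss + F.cross_entropy(tpc_head(feats_strong[high]),
                                      pseudo[high])

    # Low-confidence branch: a negative-learning term through the TNC.
    # The weak view nominates the k classes the sample is least likely
    # to be; the strong view is pushed to agree that it is "not" them.
    if low.any():
        with torch.no_grad():
            probs_neg_w = F.softmax(tnc_head(feats_weak[low]), dim=1)
            neg_targets = probs_neg_w.topk(k, dim=1, largest=False).indices
        probs_neg_s = F.softmax(tnc_head(feats_strong[low]), dim=1)
        p_not = probs_neg_s.gather(1, neg_targets).clamp(max=1.0 - 1e-6)
        loss = loss + (-(1.0 - p_not).log()).mean()

    return loss
```

Reserving the complementary "not these classes" objective for exactly the samples whose top-1 prediction is unreliable roughly corresponds to what the abstract calls the consistency of dissimilarity degree: low-confidence samples still contribute a training signal without their noisy pseudo-labels ever being trusted.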

Open Access Status

This publication may be available as open access


Link to publisher version (DOI)

http://dx.doi.org/10.1109/TNNLS.2022.3228380