Exploring auxiliary learning for long-tailed visual recognition
Real-world visual data often exhibit a long-tailed distribution, where “head” classes have a large number of samples while only a few are available for “tail” classes. The fundamental problem of learning from imbalanced data is that insufficient training samples easily lead to over-fitting of the feature extractor and classifier for tail classes, which boils down to a dilemma: on the one hand, we prefer to increase the exposure of tail-class samples to avoid excessive dominance of head classes in classifier training; on the other hand, over-sampling tail classes makes the network prone to over-fitting, since head-class samples consequently become under-represented. To resolve this dilemma, in this paper we propose an effective auxiliary learning approach. The key idea is to split a network into a feature extractor part and a classifier part, and then employ a different training strategy for each part in an auxiliary learning manner. Specifically, to promote awareness of tail classes, a class-balanced sampling scheme is utilised to train both the classifier and the feature extractor as the primary task. For the feature extractor, we also introduce an auxiliary task: training an additional classifier under the regular random sampling scheme. In this way, the feature extractor is jointly trained under both sampling strategies, and can thus take advantage of all training data while avoiding over-fitting. Beyond this basic auxiliary task, we further explore the benefits of other types of auxiliary tasks for improving the generality of the learned features, including self-supervised learning and class-wise re-weighting. Without using any bells and whistles, our model compares favourably against state-of-the-art solutions.
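The class-balanced sampling scheme mentioned above can be illustrated with a small sketch: assign each training sample a draw probability of 1/(K·n_c), where K is the number of classes and n_c is the size of the sample's class, so every class contributes equally in expectation. The helper name below is hypothetical (not from the paper), and this is a minimal stdlib-only illustration rather than the authors' implementation, which would typically plug such weights into a framework's weighted sampler.

```python
from collections import Counter

def class_balanced_weights(labels):
    """Per-sample draw probabilities so that every class is
    sampled equally often in expectation (hypothetical helper
    illustrating class-balanced sampling, not the paper's code)."""
    counts = Counter(labels)          # n_c: samples per class
    k = len(counts)                   # K: number of classes
    # Each class gets total mass 1/K, split uniformly within the class.
    return [1.0 / (k * counts[y]) for y in labels]

# Toy long-tailed labels: head class 0 has 4 samples, tail class 1 has 1.
w = class_balanced_weights([0, 0, 0, 0, 1])
# Each head sample: 1/(2*4) = 0.125; the lone tail sample: 1/(2*1) = 0.5,
# so both classes carry equal total probability mass of 0.5.
```

In contrast, the regular random sampling used for the auxiliary task corresponds to uniform per-sample weights, which preserves the original head-heavy distribution; the feature extractor in the proposed approach receives gradients from both regimes.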
Open Access Status
This publication is not available as open access
Funding
Australian Research Council