Miss detection vs. false alarm: Adversarial learning for small object segmentation in infrared images



Publication Details

Wang, H., Zhou, L. & Wang, L. (2019). Miss detection vs. false alarm: Adversarial learning for small object segmentation in infrared images. Proceedings of the IEEE International Conference on Computer Vision (pp. 8508-8517).


© 2019 IEEE. A key challenge of infrared small object segmentation (ISOS) is to balance miss detection (MD) and false alarm (FA). This usually requires "opposite" strategies to suppress the two terms, and has not been well resolved in the literature. In this paper, we propose a deep adversarial learning framework to improve this situation. Departing from the tradition of jointly reducing MD and FA via a single objective, we decompose this difficult task into two sub-tasks handled by two models trained adversarially, each focusing on reducing either MD or FA. This new design brings at least three advantages. First, as each model focuses on a relatively simpler sub-task, the overall difficulty of ISOS is somewhat decreased. Second, the adversarial training of the two models naturally produces a delicate balance of MD and FA, and low rates for both can be achieved at Nash equilibrium. Third, this MD-FA detachment gives us more flexibility to develop specific models dedicated to each sub-task. To realize the above design, we propose a conditional Generative Adversarial Network comprising two generators and one discriminator. Each generator strives for one sub-task, while the discriminator differentiates the three segmentation results from the two generators and the ground truth. Moreover, in order to better serve the sub-tasks, the two generators, based on context aggregation networks, utilize receptive fields of different sizes, providing both local and global views of objects for segmentation. As verified on multiple infrared image data sets, our method consistently achieves better segmentation than many state-of-the-art ISOS methods.
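The two generators differ chiefly in receptive field size, which context aggregation networks control through the dilation rates of stacked convolutions. As a minimal sketch (the dilation schedules below are illustrative assumptions, not the paper's exact configuration), the receptive field of a stack of stride-1 dilated convolutions can be computed as:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of stride-1 dilated convolutions.

    Each 3x3 layer with dilation d enlarges the receptive field by
    (kernel_size - 1) * d pixels.
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Hypothetical schedules: constant dilations give the "local-view" generator
# a small receptive field; exponentially growing dilations give the
# "global-view" generator a much larger one at the same depth budget.
local_dilations = [1, 1, 1]
global_dilations = [1, 2, 4, 8, 16]

print(receptive_field(3, local_dilations))   # -> 7
print(receptive_field(3, global_dilations))  # -> 63
```

With only five layers, exponentially increasing dilations expand the receptive field from 7 to 63 pixels, which is the mechanism that lets one generator see fine local detail while the other aggregates wider context.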
