Pseudo-Label Guided Contrastive Learning for Semi-Supervised Medical Image Segmentation

Abstract: Although recent works in semi-supervised learning (SemiSL) have achieved significant success in natural image segmentation, learning discriminative representations from limited annotations remains an open problem for medical images. Contrastive learning (CL) frameworks rely on a notion of similarity that is useful for classification; however, they fail to transfer these quality representations to accurate pixel-level segmentation. To this end, we propose a novel semi-supervised patch-based CL framework for medical image segmentation that requires no explicit pretext task. We harness the power of both CL and SemiSL: the pseudo-labels generated by SemiSL guide CL by providing additional supervision, while the discriminative class information learned by CL leads to accurate multi-class segmentation. Additionally, we formulate a novel loss that synergistically encourages inter-class separability and intra-class compactness among the learned representations. A new inter-patch semantic disparity mapping based on average patch entropy guides the sampling of positives and negatives in the proposed CL framework. Experimental analysis on three publicly available datasets of multiple modalities shows the superiority of our method over state-of-the-art approaches. Code is available at:
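The two core ideas in the abstract can be illustrated with a minimal sketch: pseudo-labels act as class assignments for a supervised-style contrastive loss over patch embeddings, and average patch entropy scores how semantically ambiguous a patch is (candidates for guided sampling). This is a hypothetical NumPy illustration of the general mechanism, not the authors' implementation; the function names, the InfoNCE-style loss form, and the temperature value are assumptions.

```python
import numpy as np


def patch_entropy(prob_map):
    """Average pixel-wise entropy of a patch's softmax probabilities.

    prob_map: array of shape (H, W, num_classes). High values indicate
    semantically ambiguous patches (a hypothetical disparity score).
    """
    eps = 1e-8  # numerical stability for log
    ent = -np.sum(prob_map * np.log(prob_map + eps), axis=-1)
    return float(ent.mean())


def pseudo_label_contrastive_loss(embeddings, pseudo_labels, temperature=0.1):
    """Supervised-contrastive loss where pseudo-labels define positives.

    Patches sharing a pseudo-label are pulled together (intra-class
    compactness); patches with different pseudo-labels are pushed apart
    (inter-class separability).
    """
    # L2-normalize embeddings so similarity is cosine similarity
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(pseudo_labels)
    loss, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n)
                     if j != i and pseudo_labels[j] == pseudo_labels[i]]
        if not positives:
            continue
        # log of the denominator over all other samples (InfoNCE-style)
        others = np.arange(n) != i
        log_denom = np.log(np.exp(sim[i][others]).sum())
        for j in positives:
            loss += -(sim[i, j] - log_denom)
            count += 1
    return loss / max(count, 1)
```

For example, with four patch embeddings where same-pseudo-label patches have identical feature vectors, the loss is far smaller than when the labels are shuffled across the two clusters, reflecting the compactness/separability objective.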

Basak, Hritam, and Zhaozheng Yin. 2023. “Pseudo-Label Guided Contrastive Learning for Semi-Supervised Medical Image Segmentation.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19786–97.