Abstract:
Semantic segmentation is a computer vision technique widely used in scenarios such as autonomous driving and defect detection, but its fine-grained, pixel-level annotations are extremely costly to obtain. How to exploit easily obtained image-level labels for weakly supervised semantic segmentation has therefore been a long-standing research focus. Compared with pixel-level segmentation based on class activation maps (CAMs) alone, a masked consistency mechanism (MCM) is proposed to provide additional supervision signals and narrow the gap between full supervision and weak supervision. In fully supervised semantic segmentation, the network receives consistent pixel-level supervision for the mask prediction of every image patch; accordingly, some patches are masked out in the vision transformer (ViT), and the CAMs generated from the retained patches are required to be consistent with those generated from the complete image, supplying an additional self-supervision signal for network training. Experiments on PASCAL VOC 2012 and MS COCO show that the proposed method outperforms state-of-the-art methods using the same level of supervision.
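To make the idea of the masked consistency signal concrete, the sketch below outlines one way such a term could be computed: randomly mask out ViT patches, produce CAMs from the retained patches, and penalize their deviation from the CAMs of the complete image. The `model` interface, the `keep_mask` argument, the `num_patches` attribute, and the use of an MSE consistency loss are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def masked_consistency_loss(model, images, mask_ratio=0.5):
    """Minimal sketch of a masked consistency objective (assumed interface).

    `model(images, keep_mask=...)` is assumed to return class activation
    maps of shape (B, C, H, W); the MCM described in the abstract may
    differ in how masking and consistency are realized.
    """
    batch_size = images.size(0)
    num_patches = model.num_patches  # assumed attribute of the ViT backbone

    # CAMs from the complete images (no patches masked out).
    with torch.no_grad():
        cam_full = model(images, keep_mask=None)

    # Randomly retain a subset of patches; the rest are masked out.
    keep_mask = torch.rand(batch_size, num_patches, device=images.device) > mask_ratio

    # CAMs predicted from the retained patches only.
    cam_masked = model(images, keep_mask=keep_mask)

    # Encourage the masked-view CAMs to match the full-image CAMs,
    # providing an additional self-supervision signal.
    return F.mse_loss(cam_masked, cam_full)
```

In training, a loss of this form would be added to the usual image-level classification loss, so that the consistency requirement acts as the extra supervision the abstract describes.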