Foreground-aware Image Inpainting

Wei Xiong1, Jiahui Yu2, Zhe Lin3, Jimei Yang3, Xin Lu3, Connelly Barnes3 and Jiebo Luo1

 1University of Rochester

 2University of Illinois at Urbana-Champaign

  3Adobe Research


Abstract

Existing image inpainting methods typically fill holes by borrowing information from surrounding pixels. They often produce unsatisfactory results when the holes overlap with or touch foreground objects, due to the lack of information about the actual extent of foreground and background regions within the holes. These scenarios, however, are very important in practice, especially for applications such as the removal of distracting objects. To address the problem, we propose a foreground-aware image inpainting system that explicitly disentangles structure inference from content completion. Specifically, our model first learns to predict the foreground contour, and then inpaints the missing region using the predicted contour as guidance. We show that with this disentanglement, the contour completion model predicts reasonable object contours and substantially improves the performance of image inpainting. Experiments show that our method significantly outperforms existing methods and achieves superior inpainting results on challenging cases with complex compositions.


Network Architecture

[Figure: the overall architecture of our model]

Contour Detection Module: we first detect the salient object regions in the image. Note that the objects may be corrupted by holes, which makes salient object segmentation more challenging. We then extract the contours of the detected object regions.
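As a concrete illustration, here is a minimal Python sketch of the contour-extraction step, assuming OpenCV >= 4 and a binary saliency mask (255 = object, 0 = background) produced by an off-the-shelf salient object detector; extract_contour_map is a hypothetical helper, not code from the paper.

import cv2
import numpy as np

def extract_contour_map(saliency_mask: np.ndarray) -> np.ndarray:
    """Convert a binary saliency mask (uint8) into a thin contour map."""
    contours, _ = cv2.findContours(
        saliency_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE
    )
    contour_map = np.zeros_like(saliency_mask)
    # Draw all external object boundaries as one-pixel-wide curves.
    cv2.drawContours(contour_map, contours, -1, color=255, thickness=1)
    return contour_map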

Contour Completion Module: in this stage, we complete the detected contours based on a semantic understanding of the objects, using a GAN-based network to synthesize clean and plausible contours. To stabilize training, we adopt curriculum training: we first pretrain the generator without the adversarial loss, then fine-tune it with a small weight on the adversarial loss, and finally fine-tune it with a larger weight. In this way, the model progressively learns to predict the missing contours.
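Here is a minimal Python sketch of such a curriculum schedule; the stage boundaries (epochs 10 and 20) and the loss weights (0.01 and 0.1) are illustrative assumptions, not the values used in the paper.

def adversarial_weight(epoch: int) -> float:
    """Weight on the adversarial loss for the current training stage."""
    if epoch < 10:   # stage 1: pretrain the generator without adversarial loss
        return 0.0
    if epoch < 20:   # stage 2: fine-tune with a small adversarial weight
        return 0.01
    return 0.1       # stage 3: fine-tune with a larger adversarial weight

# In the training loop, the generator objective combines the two terms, e.g.:
#   loss_g = reconstruction_loss + adversarial_weight(epoch) * adversarial_loss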

Image Completion Module: guided by the completed contour, an image inpainting model fills the missing regions with plausible content. A GAN-based network is also adopted here.
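One common way to inject such contour guidance is to concatenate the corrupted image, the hole mask, and the completed contour along the channel dimension as the generator input. Below is a minimal PyTorch sketch of this idea; generator is a placeholder for any encoder-decoder inpainting network, and the exact inputs may differ from the paper's implementation.

import torch

def complete_image(generator, image, mask, contour):
    """
    image:   (B, 3, H, W) input image
    mask:    (B, 1, H, W) hole mask, 1 = hole, 0 = valid
    contour: (B, 1, H, W) completed foreground contour
    """
    # Zero out the holes, then stack image, mask, and contour as input.
    x = torch.cat([image * (1 - mask), mask, contour], dim=1)  # (B, 5, H, W)
    prediction = generator(x)
    # Keep valid pixels from the input; fill the holes with the prediction.
    return image * (1 - mask) + prediction * mask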


Results

Intuitive Comparison

[Figure: intuitive comparison with existing methods]

Qualitative Comparison

[Figure: qualitative comparison with existing methods]

Contour Detection and Completion

[Figure: contour detection and completion results]

Dataset

We release an irregular hole mask dataset for image inpainting. It contains 100,000 masks with irregular holes for training and 10,000 masks for testing. Each mask is a 256×256 grayscale image, where 255 indicates hole pixels and 0 indicates valid pixels. Random flipping should be applied to the masks during training. Download link: irregular_mask.zip.
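Below is a minimal Python sketch of loading one mask under the convention above and applying the random flip; the file path is hypothetical.

import random
import cv2

# Load a mask as grayscale and binarize it: 1 = hole, 0 = valid pixel.
mask = cv2.imread("irregular_mask/train/00001.png", cv2.IMREAD_GRAYSCALE)
mask = (mask > 127).astype("float32")
if random.random() < 0.5:      # random horizontal flip as augmentation
    mask = mask[:, ::-1].copy()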


Citation

@InProceedings{Xiong_2019_CVPR,
author = {Xiong, Wei and Yu, Jiahui and Lin, Zhe and Yang, Jimei and Lu, Xin and Barnes, Connelly and Luo, Jiebo},
title = {Foreground-Aware Image Inpainting},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}

You can find our paper here: [pdf].


FAQ

Q: Is the code available?

A: Currently we are unable to publish the source code. However, we can run the experiments for you if you provide the input images and masks.

Q: Is the saliency dataset available?

A: Unfortunately, we are unable to share the dataset. The saliency dataset is composed of two public datasets (DUT-OMRON, MSRA1000) and images modified/annotated by Adobe. Due to Adobe's privacy policy, we currently cannot release the dataset. You can use the two aforementioned public datasets instead.