To train a change detector, bi-temporal images taken at different times over the same area are used. However, collecting labeled bi-temporal images is expensive and time-consuming. To solve this problem, various unsupervised change detection methods have been proposed, but they still require unlabeled bi-temporal images. In this paper, we propose unsupervised change detection based on image reconstruction loss, which uses only unlabeled single-temporal single images. The image reconstruction model is trained to reconstruct the original source image from a pair consisting of the source image and a photometrically transformed version of it. During inference, the model receives a bi-temporal image pair as input and tries to reconstruct one of the inputs. The changed regions between the bi-temporal images show high reconstruction loss. Our change detector achieves strong performance on various change detection benchmark datasets even though only single-temporal single source images were used for training. The code and trained models will be made publicly available for reproducibility.
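As a rough sketch of the training setup described above (the `reconstructor` model, the channel-wise concatenation of the input pair, the jitter strengths, and the L1 loss are our assumptions for illustration, not the exact configuration from the paper):

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

# Photometric transform used to build the pseudo-unchanged pair
# (jitter strengths are illustrative assumptions).
photometric = transforms.ColorJitter(brightness=0.4, contrast=0.4,
                                     saturation=0.4, hue=0.1)

def training_step(reconstructor, x_t1, optimizer):
    """One training step: reconstruct x_t1 from (x_t1, photometric(x_t1))."""
    x_t1_aug = photometric(x_t1)               # pseudo "second" image
    pair = torch.cat([x_t1, x_t1_aug], dim=1)  # concatenate along the channel axis
    x_hat = reconstructor(pair)                # reconstructed source image
    loss = F.l1_loss(x_hat, x_t1)              # image reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```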
CDRL is trained to reconstruct Xt1 from a pseudo-unchanged pair. At inference, when it receives a changed bi-temporal pair unlike anything seen during training, the reconstruction loss becomes large in regions with significant structural change.
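A minimal sketch of the inference step, assuming the same channel-concatenated input format and a per-pixel L1 error as the difference map (both are our assumptions about the implementation):

```python
import torch

@torch.no_grad()
def change_map(reconstructor, x_t1, x_t2):
    """Per-pixel reconstruction error between the model output and x_t1.

    Regions that actually changed between t1 and t2 were never seen as a
    pseudo-unchanged pair during training, so their error stays high.
    """
    pair = torch.cat([x_t1, x_t2], dim=1)   # bi-temporal input pair
    x_hat = reconstructor(pair)             # model tries to reconstruct x_t1
    diff = torch.abs(x_hat - x_t1).mean(dim=1, keepdim=True)  # difference map
    return diff
```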
The result videos show the difference map at each threshold. We used a threshold of 0.7.
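For example, a binary change mask can be obtained by thresholding the difference map after normalizing it to [0, 1] (the min-max normalization step is our assumption):

```python
def binarize(diff, threshold=0.7):
    """Normalize the difference map to [0, 1] and apply the threshold."""
    diff = (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)
    return (diff > threshold).float()
```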
If you want to cite our work, please use:
@InProceedings{Noh2022CDRL,
    author    = {Hyeoncheol Noh and Jingi Ju and Minseok Seo and Jongchan Park and Dong-Geol Choi},
    title     = {Unsupervised Change Detection Based on Image Reconstruction Loss},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    year      = {2022},
}