Learning Object Color Models from Multi-view Constraints

Citation:

Owens T, Saenko K, Chakrabarti A, Xiong Y, Zickler T, Darrell T. Learning Object Color Models from Multi-view Constraints. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Colorado Springs, CO; 2011.

Abstract:

Color is known to be highly discriminative for many object recognition tasks, but is difficult to infer from uncontrolled images in which the illuminant is not known. Traditional methods for color constancy can improve surface reflectance estimates from such uncalibrated images, but their output depends significantly on the background scene. In many recognition and retrieval applications, we have access to image sets that contain multiple views of the same object in different environments; we show in this paper that correspondences between these images provide important constraints that can improve color constancy. We introduce the multi-view color constancy problem, and present a method to recover estimates of underlying surface reflectance based on joint estimation of these surface properties and the illuminants present in multiple images. The method can exploit image correspondences obtained by various alignment techniques, and we show examples based on matching local region features. Our results show that multi-view constraints can significantly improve estimates of both scene illuminants and object color (surface reflectance) when compared to a baseline single-view method.
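The joint estimation the abstract describes can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the authors' formulation: it uses a von Kries diagonal illuminant model, where an observed color is `c[i, j] = L[i] * r[j]` per channel (`L[i]` the illuminant of image `i`, `r[j]` the reflectance of surface `j`), so that when the same surfaces are matched across views, taking logs turns joint recovery of illuminants and reflectances into a linear least-squares problem.

```python
import numpy as np

def joint_estimate(obs):
    """Hypothetical multi-view sketch (not the paper's exact method).

    obs: (n_images, n_surfaces) array of positive observations for one
    color channel, where the same surfaces appear in every image.
    Solves log obs[i, j] = log L[i] + log r[j] by least squares and
    returns (L, r), with the global scale ambiguity fixed by requiring
    the mean log-illuminant to be zero.
    """
    n_img, n_srf = obs.shape
    rows, rhs = [], []
    for i in range(n_img):
        for j in range(n_srf):
            row = np.zeros(n_img + n_srf)
            row[i] = 1.0          # coefficient for log L[i]
            row[n_img + j] = 1.0  # coefficient for log r[j]
            rows.append(row)
            rhs.append(np.log(obs[i, j]))
    # Gauge constraint: without it, adding t to every log L and
    # subtracting t from every log r leaves the observations unchanged.
    gauge = np.zeros(n_img + n_srf)
    gauge[:n_img] = 1.0
    rows.append(gauge)
    rhs.append(0.0)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.exp(x[:n_img]), np.exp(x[n_img:])
```

This is why multiple views help: a single image cannot separate `L` from `r` at all, whereas shared surfaces across differently lit images make the system overdetermined up to one global scale.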

mvcc.pdf (4.01 MB)