Holistic descriptors of omnidirectional color images and their performance in the estimation of position and orientation
The use of visual sensors in robotic navigation tasks is a common approach, and numerous examples can be found in the literature. This work focuses on the problems of map building and localization using omnidirectional images as the only source of information. The main objective of this paper is to present a thorough comparison of global-appearance description techniques, including several approaches to the use of color information. Some of the descriptors have been widely tested in previous works using gray-level images; in the present work we concentrate on the role and efficiency of the color information. Other descriptors are presented for the first time. To carry out this study, a database captured in different areas of an office environment is used, comprising two datasets: a training dataset and a test dataset. The experimental results include the computational requirements of the map building and localization processes, as well as the accuracy of the pose estimation of the test images in a topological map, evaluating position and orientation separately. To complete the study, the behavior of the descriptors is tested when the images present noise or occlusions, with particular attention to the effect on the color information.