Hierarchical Localization in Topological Models Under Varying Illumination Using Holistic Visual Descriptors
In this paper, a hierarchical localization framework for indoor environments is proposed and evaluated under severe variations of the illumination conditions. The only source of information, both to build a model of the environment and to solve the localization problem, is a catadioptric vision system mounted on the mobile robot. The images captured by this system are processed globally to obtain holistic descriptors. The position of the robot is estimated by comparing these descriptors with the information contained in a topological visual model, which is previously created using a clustering approach and is composed of a hierarchy of layers. Compacting the information via clustering proves to be an efficient way to estimate the position of the robot hierarchically and robustly. The proposed localization strategy is tested with several sets of panoramic images, captured in large indoor environments under real operating conditions, including illumination changes that substantially alter the appearance of the scenes. The results show a reasonable tradeoff between computation time and accuracy when the localization is addressed in a hierarchical way.
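The coarse-to-fine matching idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual method: it assumes holistic descriptors are fixed-length vectors compared with Euclidean distance, and that the hierarchy has two layers (cluster representatives, then the members of the winning cluster); the function and variable names are hypothetical.

```python
import numpy as np

def hierarchical_localize(query, reps, clusters):
    """Two-layer localization sketch.

    query    : (d,) holistic descriptor of the current image.
    reps     : (k, d) representative descriptor of each cluster (coarse layer).
    clusters : dict mapping cluster index -> (members, positions), where
               members is an (n_i, d) array of descriptors and positions
               holds the map position associated with each member.
    """
    # Coarse layer: compare only against the k cluster representatives.
    coarse = int(np.argmin(np.linalg.norm(reps - query, axis=1)))
    members, positions = clusters[coarse]
    # Fine layer: exhaustive comparison restricted to the winning cluster,
    # so the cost depends on the cluster size, not the whole map.
    fine = int(np.argmin(np.linalg.norm(members - query, axis=1)))
    return positions[fine]

# Toy usage with 2-D "descriptors": two clusters of two images each.
reps = np.array([[0.0, 0.0], [10.0, 10.0]])
clusters = {
    0: (np.array([[0.0, 1.0], [1.0, 0.0]]), ["room A", "room B"]),
    1: (np.array([[9.0, 9.0], [11.0, 11.0]]), ["room C", "room D"]),
}
print(hierarchical_localize(np.array([10.5, 10.5]), reps, clusters))
```

The key design point the sketch reflects is that the coarse layer bounds the number of distance computations, which is where the time/accuracy tradeoff reported in the experiments comes from.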