Performance of New Global Appearance Description Methods in Localization of Mobile Robots
V. Román, L. Payá, M. Flores, S. Cebollada, O. Reinoso
Robot 2019, Advances in Intelligent Systems and Computing  (2019)
Ed. Springer, ISBN: 978-3-030-36149-5, pp. 351-363


Autonomous robots should be able to perform localization and map creation robustly. Many studies and techniques have been evaluated over the past few years to solve these problems. This work focuses on the use of an omnidirectional vision sensor and global appearance techniques to describe each image. Global-appearance techniques consist of obtaining a unique vector that describes the panoramic image globally. Once the images have been described, the mobile robot can use these descriptors both to create a map of the environment and to estimate its position and orientation within it. The main objective of this work is to propose and test new alternatives to describe scenes globally. The results will be used to propose new robust methods to estimate the position and orientation of the robot from the combination of several measurements of similarity of visual information. Therefore, the present work is an initial study towards a new localization method. In this initial study, a comparison between the previous and the new methods is performed. The experiments are carried out with real images captured in a heterogeneous scenario where humans and robots work together simultaneously. For this reason, variations in lighting conditions, people occluding the scene and changes in the furniture may appear.
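The pipeline described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual descriptors: `global_descriptor` is a hypothetical stand-in that collapses a panoramic image into one coarse intensity vector (the role played in the paper by global-appearance methods such as HOG or gist), and `localize` estimates the robot's pose as that of the most similar stored descriptor.

```python
import numpy as np

def global_descriptor(image, grid=(4, 16)):
    """Collapse a grayscale panoramic image into a single compact vector.

    Illustrative stand-in for a global-appearance descriptor: the image
    is averaged over a coarse grid and the result is L2-normalized so
    images can be compared by Euclidean distance.
    """
    h, w = image.shape
    gh, gw = grid
    # Crop so the image divides evenly into grid cells, then average per cell.
    cropped = image[: h - h % gh, : w - w % gw]
    cells = cropped.reshape(gh, cropped.shape[0] // gh,
                            gw, cropped.shape[1] // gw).mean(axis=(1, 3))
    desc = cells.ravel()
    return desc / (np.linalg.norm(desc) + 1e-12)

def localize(query_desc, map_descs, map_poses):
    """Return the pose of the map descriptor most similar to the query.

    map_descs: (N, D) array of descriptors; map_poses: list of N poses.
    """
    dists = np.linalg.norm(map_descs - query_desc, axis=1)
    best = int(np.argmin(dists))
    return map_poses[best], dists[best]

# Build a toy "map" of five random panoramic images with known poses,
# then localize a slightly perturbed view of one of them.
rng = np.random.default_rng(0)
images = [rng.random((64, 256)) for _ in range(5)]
map_descs = np.stack([global_descriptor(im) for im in images])
map_poses = [(float(i), 0.0) for i in range(5)]

query = global_descriptor(images[2] + 0.01 * rng.random((64, 256)))
pose, dist = localize(query, map_descs, map_poses)
```

Even this crude descriptor retrieves the correct map position for the perturbed query, which is the core idea the paper builds on: localization reduces to comparing compact global descriptors, and the open question is which description method is most robust to lighting changes and occlusions.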