An evaluation between global appearance descriptors based on analytic methods and deep learning techniques for localization in autonomous mobile robots
S. Cebollada, L. Payá, D. Valiente, X. Jiang, O. Reinoso
Proceedings of the 16th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2019)  (Prague (Czech Republic), 2019)
Ed. SCITEPRESS, Science and Technology Publications  ISBN:978-989-758-380-3  DOI:10.5220/0007837102840291  - Vol 2. 284-291


In this work, different global appearance descriptors are evaluated to carry out the localization task, which is a crucial skill for autonomous mobile robots. The only information source used to solve this problem is an omnidirectional camera. The captured images are processed to obtain global appearance descriptors, and the position of the robot is estimated by comparing the descriptor calculated for the test image with the descriptors contained in the visual model. The descriptors evaluated are based on (1) analytic methods (HOG and gist) and (2) deep learning techniques (auto-encoders and Convolutional Neural Networks). The localization is tested with a panoramic dataset captured in indoor environments under real operating conditions. The results show that descriptors based on deep learning can also be an interesting solution to carry out visual localization tasks.
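The localization scheme described above (describe every model image globally, then match the test image's descriptor against the model by distance) can be sketched as follows. This is a minimal, hypothetical NumPy illustration, not the authors' implementation: the simplified HOG-style descriptor, the cell-grid parameters, and the Euclidean nearest-neighbor matching are all assumptions chosen for brevity.

```python
import numpy as np

def global_hog_descriptor(img, cells=(4, 8), bins=8):
    """Simplified global HOG-style descriptor (illustrative only):
    unsigned gradient-orientation histograms over a coarse cell grid,
    concatenated and L2-normalized."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    h, w = img.shape
    ch, cw = h // cells[0], w // cells[1]
    feats = []
    for i in range(cells[0]):
        for j in range(cells[1]):
            m = mag[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw].ravel()
            a = ang[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats.append(hist)
    d = np.concatenate(feats)
    n = np.linalg.norm(d)
    return d / n if n > 0 else d

def localize(test_img, model_imgs):
    """Estimate the robot position as the index of the model image whose
    descriptor lies closest (Euclidean distance) to the test descriptor."""
    d_test = global_hog_descriptor(test_img)
    dists = [np.linalg.norm(d_test - global_hog_descriptor(m))
             for m in model_imgs]
    return int(np.argmin(dists))
```

In the paper the same comparison is performed with other global descriptors (gist, auto-encoder features, CNN features); only the descriptor function would change, while the matching step stays the same.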