SLAM Algorithm Using the Global Appearance of Omnidirectional Images
Y. Berenguer, L. Payá, A. Peidró, O. Reinoso
Proceedings of the 14th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2017) (Madrid (SPAIN), 26-28 July 2017)
Ed. SCITEPRESS. ISBN: 978-989-758-264-6. DOI: https://doi.org/10.5220/0006434503820388. Vol. 2, pp. 382-388
This work presents a SLAM algorithm that estimates the position and orientation of a mobile robot while simultaneously building a map of the environment. It uses only the visual information provided by a catadioptric system mounted on the robot, composed of a camera pointing towards a convex mirror. This system provides the robot with omnidirectional images that cover a 360-degree field of view around the camera-mirror axis.
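An omnidirectional image of this kind is usually unwrapped into a panoramic image whose columns correspond to azimuth angles. One common way to describe the global appearance of such a panorama compactly, and invariantly to rotations around the camera-mirror axis, is the Fourier Signature; the sketch below illustrates that idea under those assumptions (the function name and parameters are hypothetical, not taken from the paper):

```python
import numpy as np

def global_descriptor(panorama, k=8):
    """Compact global-appearance descriptor of a panoramic image.

    Keeps the magnitudes of the first k Fourier components of each
    row. A rotation of the robot around the camera-mirror axis is a
    circular shift of the panorama columns, which changes only the
    phase of the row-wise DFT, so the magnitude part of the spectrum
    is rotation-invariant.
    """
    spectrum = np.fft.fft(panorama.astype(float), axis=1)
    return np.abs(spectrum[:, :k]).ravel()
```

For an H x W panorama this keeps only H*k values per image, which is the kind of compactness that makes comparing the current scene against every stored node feasible in real time.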
Each omnidirectional scene acquired by the robot is described using global appearance descriptors. Thanks to their compactness, these descriptors permit running the algorithm in real time. The method consists of three steps. First, the robot estimates its pose (position and orientation) and creates a new node in the map, which is formed by a set of interconnected nodes. Second, it detects loop closures between the new node and the existing nodes of the map. Finally, the map is optimized using the detected loop closures. Two sets of images, captured in two real environments while the robot traversed two different paths, have been used to test the method. The results of the experiments demonstrate its effectiveness.
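The loop-closure and optimization steps of such a pipeline can be sketched as follows. This is a toy illustration, not the authors' implementation: the similarity threshold, the minimum node gap, and the use of simple gradient relaxation on scalar poses (instead of a full nonlinear least-squares optimizer over planar poses) are all assumptions made for brevity:

```python
import numpy as np

def detect_loop_closures(descriptors, new_idx, threshold=0.3, min_gap=5):
    """Step 2: compare the newest node's descriptor with earlier ones.

    A small relative distance suggests the robot has revisited a place.
    Nodes closer than min_gap to the new node are skipped, since
    consecutive poses always look similar. Returns (old, new) index pairs.
    """
    closures = []
    d_new = descriptors[new_idx]
    for i in range(new_idx - min_gap):
        dist = np.linalg.norm(d_new - descriptors[i])
        dist /= np.linalg.norm(descriptors[i]) + 1e-9
        if dist < threshold:
            closures.append((i, new_idx))
    return closures

def optimize_poses(poses, edges, iters=500, lr=0.1):
    """Step 3: relax the pose graph so every edge is satisfied.

    Each edge (i, j, delta) encodes the constraint
    poses[j] - poses[i] ~= delta, coming either from odometry between
    consecutive nodes or from a detected loop closure. Plain gradient
    descent on the squared constraint error; the first pose is anchored
    to fix the gauge freedom.
    """
    poses = poses.astype(float).copy()
    for _ in range(iters):
        grad = np.zeros_like(poses)
        for i, j, delta in edges:
            err = (poses[j] - poses[i]) - delta
            grad[j] += err
            grad[i] -= err
        grad[0] = 0.0          # keep the first node fixed
        poses -= lr * grad
    return poses
```

With only the odometry chain, drift accumulated at the last node cannot be corrected; adding a single loop-closure edge back to an earlier node pulls the whole trajectory towards a consistent estimate, which is exactly the role of the final optimization step.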