Trajectory estimation and optimization through loop closure detection using omnidirectional imaging and global-appearance descriptors
Currently, the range of applications of mobile robots has expanded substantially thanks to the evolution of sensing and computing technologies. In this field, creating accurate and compact models of the environment is crucial so that the robot can estimate its position and move autonomously to target points. Among the available alternatives, computer vision sensors have become especially important for creating these models, thanks to the richness of the data they can capture. However, they require the implementation of algorithms to extract relevant information from the scenes. In this work, a framework to create a model of a priori unknown environments is presented, which is based on the global appearance of images. The model is created on-line, as the robot explores the environment, and the result is a graph whose nodes contain images and whose links represent the relative distances between them. The framework includes a scheme that fuses the information extracted from the scenes with the angle information provided by the odometry of the robot, considering the relative reliability of each piece of information. Also, a loop closure detection algorithm is proposed, which corrects the positions of the nodes and updates the map. A set of experiments has been conducted to study the influence of the most relevant parameters on the accuracy of the model and the computational cost of the process.
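The abstract describes a graph-based map whose nodes hold global-appearance descriptors of images, links store odometry displacements, and a loop closure step corrects accumulated drift. The sketch below is a minimal, hypothetical illustration of that idea (it is not the authors' implementation): nodes are added from odometry deltas, a revisit is detected by cosine similarity between descriptors, and the closure error is distributed linearly along the loop. The class name, similarity threshold, and linear error distribution are all illustrative assumptions.

```python
import numpy as np


class GraphMap:
    """Minimal topological map sketch: each node holds a global-appearance
    descriptor and an estimated 2-D position; consecutive nodes are linked
    by the relative displacement measured by odometry.
    Hypothetical structure, not the paper's actual implementation."""

    def __init__(self, sim_threshold=0.95):
        self.descriptors = []       # one global-appearance vector per node
        self.positions = []         # estimated (x, y) of each node
        self.sim_threshold = sim_threshold  # assumed similarity threshold

    def add_node(self, descriptor, odom_delta=None):
        """Append a node, placing it at the previous position plus the
        odometry displacement (the first node sits at the origin)."""
        d = np.asarray(descriptor, dtype=float)
        d = d / np.linalg.norm(d)   # normalize so dot product = cosine sim
        if not self.positions:
            self.positions.append(np.zeros(2))
        else:
            self.positions.append(self.positions[-1]
                                  + np.asarray(odom_delta, dtype=float))
        self.descriptors.append(d)
        return len(self.positions) - 1

    def detect_loop_closure(self):
        """Compare the newest descriptor with all non-adjacent older ones;
        a high cosine similarity suggests the robot revisited a place."""
        if len(self.descriptors) < 3:
            return None
        current = self.descriptors[-1]
        for i, d in enumerate(self.descriptors[:-2]):
            if float(current @ d) >= self.sim_threshold:
                return i
        return None

    def close_loop(self, match_idx):
        """Correct odometry drift by distributing the closure error linearly
        over the nodes between the matched node and the current one
        (a simple stand-in for a full graph optimization)."""
        j = len(self.positions) - 1
        error = self.positions[match_idx] - self.positions[j]
        n = j - match_idx
        for k in range(match_idx + 1, j + 1):
            self.positions[k] = self.positions[k] + error * (k - match_idx) / n


# Usage sketch: a square trajectory with odometry drift on the last leg.
g = GraphMap()
g.add_node([1, 0, 0, 0])                    # start
g.add_node([0, 1, 0, 0], (1.0, 0.0))
g.add_node([0, 0, 1, 0], (0.0, 1.0))
g.add_node([0, 0, 0, 1], (-1.0, 0.0))
g.add_node([1, 0, 0, 0], (0.1, -0.9))       # revisits the start, with drift
match = g.detect_loop_closure()             # matches node 0
if match is not None:
    g.close_loop(match)                     # drift corrected along the loop
```

After the closure, the last node coincides with the first one again, and intermediate nodes receive a proportional share of the correction.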