To be truly autonomous, a mobile robot should be capable of navigating through any kind of environment while carrying out a task. To do so, the robot must be able to build a model of its workspace that allows it to estimate its position within that workspace and to navigate along a trajectory.
Map building and navigation are currently very active research areas, on which a large number of researchers focus and where very different approaches have emerged, based on diverse algorithms and on various kinds of sensory information. To date, most efforts have concentrated on building models of the environment from a set of significant points extracted from it, without considering the global appearance of the scene.
Considering the concepts outlined above, we propose to improve existing mechanisms and develop new ones that allow efficient, robust and precise modelling of the environment using omnidirectional vision systems. The research group has experience in these areas and in recent years has developed different approaches to map building, localization, exploration and SLAM using information gathered by various kinds of vision systems installed on the robots. To carry out these approaches, an extensive study of the different description methods has been performed, covering both methods based on the extraction of significant points and local descriptors and methods based on the global appearance of the image, with remarkable results.
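As an illustration of the two description families mentioned above, the minimal sketch below contrasts a local description (significant points plus local descriptors) with a global-appearance description (a single vector computed over the whole image). It assumes OpenCV and NumPy are available; the choice of ORB as the local descriptor, the downsampled intensity vector as the global descriptor, the file name and the parameter values are all illustrative assumptions, not the methods actually developed by the group.

```python
# Illustrative sketch only: ORB and a downsampled intensity vector stand in
# for the local and global description families discussed in the text.
import cv2
import numpy as np

img = cv2.imread("omni_scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image file
if img is None:
    raise SystemExit("image not found")

# (a) Local description: detect significant points and compute a local
# descriptor around each of them (here, ORB keypoints and descriptors).
orb = cv2.ORB_create(nfeatures=500)
keypoints, local_desc = orb.detectAndCompute(img, None)

# (b) Global-appearance description: a single vector describing the whole
# scene, here a downsampled, L2-normalized intensity vector.
small = cv2.resize(img, (32, 32), interpolation=cv2.INTER_AREA)
global_desc = small.astype(np.float32).ravel()
global_desc /= np.linalg.norm(global_desc) + 1e-12

print(len(keypoints), "local features;", global_desc.shape[0], "global dimensions")
```

In a map-building or localization pipeline, the local descriptors would typically be matched feature-to-feature between images, whereas the global descriptor allows whole images to be compared directly with a single distance computation.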