Building Visual Maps With a Single Omnidirectional Camera
This paper describes an approach to the Simultaneous Localization and Mapping (SLAM) problem using a single omnidirectional camera. We assume that the robot is equipped with a catadioptric sensor and is able to extract interest points from the images. In this approach, the map is represented by a set of omnidirectional images and their positions. Each omnidirectional image has a set of interest points and visual descriptors associated with it. When the robot captures an omnidirectional image, it extracts interest points and finds correspondences with the omnidirectional images stored in the map. If a sufficient number of points are matched, a translation and rotation can be computed between the images, allowing the robot to be localized with respect to the images in the map. Typically, visual SLAM approaches concentrate on estimating a set of visual landmarks, each defined by a 3D position and a visual descriptor. In contrast with these approaches, the solution presented here simplifies the computation of the map and allows for a compact representation of the environment. We present results obtained in a simulated environment that validate the SLAM approach, as well as results obtained with real data that demonstrate the validity of the proposed solution.
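The match-then-align step described in the abstract can be sketched as follows. This is an illustrative example only: the function names, the nearest-neighbor matching with a distance threshold, and the use of a 2D rigid alignment (Kabsch method) are our own simplifications, not the paper's actual omnidirectional-image geometry.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, max_dist=0.5):
    """Greedy nearest-neighbor matching of descriptor rows (illustrative).

    Returns index pairs (i, j) where row i of desc_a is closest to
    row j of desc_b and the distance is below max_dist.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches

def estimate_rigid_transform(pts_a, pts_b):
    """Estimate R, t so that pts_b ~= R @ pts_a + t (2D Kabsch method).

    pts_a, pts_b: (N, 2) arrays of matched point coordinates.
    """
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (pts_a - ca).T @ (pts_b - cb)
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction keeps R a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Example: recover a known planar rotation and translation from
# synthetic matched interest points (hypothetical data).
rng = np.random.default_rng(0)
pts_a = rng.normal(size=(20, 2))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.0, -2.0])
pts_b = pts_a @ R_true.T + t_true
R_est, t_est = estimate_rigid_transform(pts_a, pts_b)
```

In the paper's setting, only image pairs with a sufficient number of matches would be passed to the alignment step; the threshold on match count is a design parameter not specified in the abstract.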