SIMULTANEOUS LOCALIZATION AND MAPPING IN INDOOR ENVIRONMENTS USING SIFT FEATURES

A. Gil, L. Paya, O. Reinoso, C. Fernandez, R. Puerto
Departamento de Ingeniería de Sistemas Industriales
Universidad Miguel Hernández
Avda. Universidad s/n
03202 Elche (Alicante), Spain
E-mail: arturo.gil@umh.es

Keywords: SLAM, Stereo Vision, Visual landmarks, Data Association

Abstract: We consider the problem of building a map of an unmodified environment using only visual information extracted from cameras. In order to build a map, we must estimate both the robot's location and a map of its surrounding environment. This problem is generally known as Simultaneous Localization and Mapping (SLAM). It is inherently hard because noise in the estimate of the robot's pose leads to noise in the estimate of the map, and vice versa. Past work in this area has centered on building maps using distance sensors (e.g. laser and sonar sensors). In our case, however, we propose a method to build a map based only on visual information. As the robot moves through the environment, it extracts a number of interest points from images (i.e. corners) and, using stereo vision, computes a relative measurement vector Vr=(Xr,Yr,Zr) to each of them. We use SIFT features as the relevant points extracted from the images. SIFT features are reported to be invariant to image translation, scaling and rotation, and partially invariant to illumination changes and affine projection. As a consequence, these points are suitable for localizing the robot in a particular environment. Our map consists of a number L of three-dimensional landmarks referred to a global frame Sg. In addition, each 3D landmark is assigned a SIFT descriptor that enables us to partially differentiate that particular landmark from the rest. Our approach to SLAM is based on a Rao-Blackwellised particle filter. This permits us to separate the estimation process in two parts: we estimate the path of the robot using a particle filter, and we estimate the map conditioned on each path of the robot. We present experimental results that validate our approach to vision-based SLAM in large unmodified environments.
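As an illustration of the stereo measurement step described above, the sketch below triangulates a relative measurement vector Vr=(Xr,Yr,Zr) from a pair of matched image points in a rectified stereo rig. This is a minimal example, not the authors' implementation: the pinhole parameters (focal length f, baseline B, principal point cx, cy) and the matched pixel coordinates are hypothetical values chosen for illustration, and in practice the matched points would come from SIFT keypoints detected in the left and right images (e.g. with a feature library such as OpenCV).

```python
def triangulate_stereo(uL, vL, uR, f, B, cx, cy):
    """Recover the relative 3D vector (X, Y, Z) to a landmark from a
    rectified stereo pair, given the pixel coordinates of the matched
    feature in the left image (uL, vL) and the right image (uR, same row).

    f  : focal length in pixels (hypothetical calibration value)
    B  : stereo baseline in meters
    cx, cy : principal point in pixels
    """
    d = uL - uR                 # disparity between the two views
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    Z = f * B / d               # depth from the standard disparity relation
    X = (uL - cx) * Z / f       # lateral offset, back-projected through the pinhole model
    Y = (vL - cy) * Z / f       # vertical offset
    return X, Y, Z

# Illustrative numbers only: f = 500 px, baseline B = 0.10 m,
# principal point (320, 240), and a feature matched at (370, 265) / (345, 265).
Xr, Yr, Zr = triangulate_stereo(370.0, 265.0, 345.0,
                                f=500.0, B=0.10, cx=320.0, cy=240.0)
# disparity is 25 px, so Zr = 500 * 0.10 / 25 = 2.0 m
```

Each such vector Vr is expressed in the camera (robot) frame; composing it with the robot pose hypothesis of a given particle yields the landmark position in the global frame Sg, which is what the per-particle map estimates.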