View-based SLAM in the 2-1/2D Space with Omnidirectional Images and Feature Point Information
Dr. David Valiente García

Mobile robotics has experienced an important proliferation in recent years, with many fields of application. A great variety of mobile robots is present in different sectors of society, most of them designated as autonomous. This term implies that the robot manages to operate by itself, without external supervision. To that end, the robot must be able to gather information from the environment in order to build its own representation of it, which yields a map estimate.
The scope of this thesis is focused on this aspect: the map-building process with visual information from the environment. This process entails a non-trivial task, since the localization of the robot and the map itself have to be estimated simultaneously. This leads to one of the most essential paradigms in this context: the problem of SLAM (Simultaneous Localization and Mapping).
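In its most common probabilistic formulation (standard in the SLAM literature, and recalled here only for reference), the full problem amounts to estimating the joint posterior over the robot path $x_{1:t}$ and the map $m$, given the observations $z_{1:t}$ and the control inputs $u_{1:t}$:

\[
p(x_{1:t}, m \mid z_{1:t}, u_{1:t}),
\]

so that any error in the pose estimate propagates to the landmark estimates, and vice versa.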
Different sorts of information can be acquired by a set of well-known sensors on board the robot, such as laser, sonar, GPS, etc. However, digital cameras have arisen as a promising alternative: they provide low power consumption, low cost and light weight. Moreover, these visual sensors represent a potential tool for encoding large amounts of information within a single image. Thus, in this work we propose a new map model, embedded in a visual SLAM approach, which is based solely on omnidirectional images acquired with a monocular camera. An important strength of this camera resides in its particularly wide field of view. In addition, we process the information extracted from feature points, physical landmarks which are visually detected in the images. This idea differs from traditional approaches, which basically concentrate on an accumulative scheme for the incremental re-estimation of all the landmarks in the map.
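As a rough illustration of this feature-point extraction step, the sketch below detects candidate landmarks in an omnidirectional image with OpenCV. The detector (ORB) and the file name are illustrative assumptions, not choices made in this thesis.

```python
import cv2

# Load a (hypothetical) omnidirectional frame in grayscale.
image = cv2.imread("omni_frame.png", cv2.IMREAD_GRAYSCALE)

# Detect feature points to serve as candidate visual landmarks.
# ORB is an illustrative choice; the abstract does not name a detector.
detector = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = detector.detectAndCompute(image, None)

print(f"{len(keypoints)} candidate landmarks detected")
```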
Regarding the core algorithms in this context of visual SLAM, this thesis proposes several improvements to the robustness of the standard algorithmic models. In particular, we present a customized offline model, which is capable of reducing the harmful effects associated with non-linear noise, such as that introduced by catadioptric cameras. Many of the most accepted approaches are highly sensitive to these effects and fail to provide convergence assurances for the final estimate.
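Although the details are left to the body of the thesis, offline refinements of this kind are typically cast as a non-linear least-squares problem (a standard formulation, assumed here for illustration): the state estimate $\hat{x}$ minimizes the Mahalanobis-weighted discrepancy between the observations $z_i$ and a measurement model $h_i$ with noise covariances $\Sigma_i$,

\[
\hat{x} = \arg\min_{x} \sum_i \left\| z_i - h_i(x) \right\|_{\Sigma_i^{-1}}^{2},
\]

which can be solved iteratively (e.g. by Gauss-Newton) once all the data have been collected, rather than incrementally.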
Moreover, another recognized drawback of former approaches is the management of the uncertainty of the system, which usually originates from the same non-linear sources. Consequently, the estimate may be severely impaired, as errors dramatically compromise its convergence. In this sense, this thesis contributes a robust, dynamically devised model for uncertainty reduction.
As a general commitment throughout this thesis, we establish an experimental framework for all the approaches and contributions made as a result of the research conducted in this context. Thus, experiments on both simulated and real datasets are presented throughout this document.