Generation of panoramic views and localization of a mobile robot using a system with a 360° field of view.
Dra. María Flores Tenza

Nowadays, mobile robots operate in many fields, not only in industry or in
environments that are dangerous to humans (such as planetary or mine exploration), but also
in our daily lives (such as in restaurants or even in our homes). In many situations,
they must carry out a specific task while navigating autonomously. To achieve this
safely, the mobile robot must know its environment sufficiently well and be able to localize
itself within it.
 
As outlined above, autonomous navigation involves solving a set
of tasks such as localization, mapping, and trajectory planning while avoiding obstacles.
Information about the environment must be available to the mobile robot during
navigation. Therefore, the robot must carry certain sensors on board, chosen according
to the required amount and type of information. Among all the sensors used in the
mobile robotics field, vision systems are widely employed to achieve autonomous
navigation, since a single image provides a wide variety of information (e.g., texture
or color) and can be used to solve the localization problem. Furthermore, they are a
suitable solution for different environment types (aquatic, aerial, terrestrial,
indoor or outdoor). In terms of amount of information, omnidirectional vision systems
provide a field of view of 360° around the mobile robot.
 
The present thesis has two main objectives. On the one hand, some contributions
to a probabilistic localization algorithm are proposed to achieve a more robust
estimation of the relative pose from a pair of wide field-of-view images. On the other
hand, an algorithm is presented to generate a full spherical view from a pair of fisheye
images, including correction steps that improve the quality of the resulting image.
These two objectives are described below in more detail.
Regarding the first objective, the localization task can be addressed as a visual
odometry problem. There are different approaches to this problem, but the one
chosen in this thesis is based on local feature points. In this case,
the algorithm searches for feature matches between the image captured
at the current time instant and the one captured at the previous instant. Then,
the set of matched local feature points is employed to estimate the essential matrix, which can be
decomposed into a rotation matrix and a translation vector (up to a scale factor).
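
As an illustration of this pipeline, the following minimal sketch estimates the relative pose between two consecutive frames with OpenCV. It is not the implementation developed in the thesis: it assumes perspective (pinhole) images with a known intrinsic matrix K, whereas the thesis works with wide field-of-view images, and the ORB detector and matching parameters are arbitrary choices.

```python
import cv2
import numpy as np

def relative_pose(img_prev, img_curr, K):
    """Estimate R and t (t up to scale) between two consecutive frames
    from local feature matches; a minimal sketch for calibrated
    perspective images with intrinsic matrix K."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)

    # Brute-force Hamming matching with cross-check to discard
    # asymmetric (likely wrong) correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects outlier matches while fitting the essential matrix.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # recoverPose decomposes E and uses the cheirality check to pick,
    # among the four algebraic (R, t) candidates, the one that places
    # the triangulated points in front of both cameras.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```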
 
In this work, we propose a matching search based on a probabilistic
model of the environment to improve the visual odometry algorithm, and we perform a
comparative evaluation of wide field-of-view vision systems on this task.
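
The concrete probabilistic model is developed in the thesis itself; purely to illustrate the general idea of guiding the match search with a prediction, the hypothetical sketch below gates candidate matches with a Mahalanobis test before comparing descriptors. The predict function, the covariance cov, and all names here are assumptions for illustration, not the proposed method.

```python
import numpy as np

def gated_matches(desc_prev, pts_prev, desc_curr, pts_curr,
                  predict, cov, chi2_thresh=9.21):
    """Guided matching sketch: a motion model predicts where each previous
    feature should reappear, and candidates outside the prediction's
    uncertainty gate are discarded before descriptor comparison.
    Assumes float descriptors compared with the L2 norm."""
    matches = []
    inv_cov = np.linalg.inv(cov)
    for i, (d, p) in enumerate(zip(desc_prev, pts_prev)):
        p_hat = predict(p)                      # predicted image position
        diff = pts_curr - p_hat                 # (N, 2) offsets
        # Squared Mahalanobis distance of each candidate to the prediction.
        m2 = np.einsum('ni,ij,nj->n', diff, inv_cov, diff)
        cand = np.where(m2 < chi2_thresh)[0]    # 99% gate, chi-square 2 dof
        if cand.size == 0:
            continue
        dist = np.linalg.norm(desc_curr[cand] - d, axis=1)
        matches.append((i, cand[np.argmin(dist)]))
    return matches
```

Restricting the search region in this way both reduces the cost of matching and removes many outliers before the essential matrix is even estimated.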
The second objective refers to the vision system used in this work, which is composed of two back-to-back CMOS sensors and two fisheye lenses, each with a field of view
greater than 180°. This configuration permits capturing a complete field of view,
360° horizontally and 180° vertically. The latter is a valuable advantage, especially
for autonomous navigation. However, it also requires precise processing to blend both
images into a complete panoramic view of the environment, and this processing may introduce
errors and artifacts in the resulting image, mainly in the overlapping areas. In view
of the above, this work aims to minimize the alignment error before computing the
geometric transformation through several contributions, which include some correction
steps. The experimental section proves that they are effective for generating a high-quality
panoramic view.
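
To make the underlying geometry concrete, the sketch below projects a pair of back-to-back fisheye images onto a single equirectangular panorama. It is only an idealized baseline, not the method contributed in the thesis: it assumes a perfect equidistant projection, a shared optical center, and a hard seam at the lens boundary, with none of the alignment and correction steps whose absence causes exactly the artifacts described above. The function name and parameters (fisheye_to_equirect, fov_deg) are illustrative.

```python
import numpy as np
import cv2

def fisheye_to_equirect(front, back, out_w=2048, fov_deg=190.0):
    """Map two back-to-back equidistant fisheye images (same size) onto
    one equirectangular panorama; idealized sketch without blending."""
    out_h = out_w // 2
    h, w = front.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    f = (w / 2.0) / np.radians(fov_deg / 2.0)   # equidistant: r = f * theta

    # Longitude/latitude of every output pixel.
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    lon = (u / out_w) * 2 * np.pi - np.pi       # [-pi, pi)
    lat = np.pi / 2 - (v / out_h) * np.pi       # [pi/2, -pi/2]

    # Unit ray on the sphere (z = optical axis of the front lens, y up).
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    pano = np.zeros((out_h, out_w, 3), dtype=front.dtype)
    for img, zs, xs in ((front, z, x), (back, -z, -x)):
        theta = np.arccos(np.clip(zs, -1, 1))   # angle from the optical axis
        rho = np.sqrt(xs**2 + y**2) + 1e-9      # avoid division by zero
        r = f * theta
        map_x = (cx + r * xs / rho).astype(np.float32)
        map_y = (cy - r * y / rho).astype(np.float32)  # image y points down
        warped = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
        mask = zs >= 0                          # hemisphere seen by this lens
        pano[mask] = warped[mask]
    return pano
```

Because each lens covers more than 180°, the two hemispheres overlap near the seam; the hard mask above simply assigns each pixel to the nearest lens, which is where misalignment becomes visible and where the correction steps proposed in this thesis act.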