Application of the Global Appearance of Omnidirectional Visual Information in Colour to Robotic Navigation Tasks in 2½D Space
Dr. Francisco Javier Amorós Espí

The autonomous navigation of a robot requires knowledge of the environment that surrounds it. The robot can use different sensors to gather information about its surroundings. This information must be processed and interpreted by the robot in order to perform its task, and the way it is processed depends directly on the kind of sensor used.
    Among the multiple sensors a robot can be equipped with, visual systems stand out. These sensors are lightweight, have low energy consumption and offer multiple configuration options, which makes them suitable for almost any application and navigation platform. Moreover, images provide very rich information, which can be exploited in different ways.
Considering visual navigation, over the last decades feature-based approaches have prevailed as a way to describe the visual information. These techniques rely on image segmentation or on the extraction of distinctive points, also known as landmarks.
    This thesis proposes the application of global-appearance techniques to the visual information in order to obtain image descriptors. Unlike feature-based methods, these techniques process the image as a whole, without segmenting or interpreting the scene contents.
    Global-appearance descriptors present an important advantage in unstructured environments, where landmark recognition may be difficult. However, since they work with the whole image, it is important to find methods that process images efficiently and describe them with few terms. These descriptors have proved useful in navigation tasks, allowing pose estimation within visual maps. The great majority of existing proposals use the visual information of a single channel, which corresponds to the grey-scale image.
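    To ground the idea, one global-appearance descriptor that is common in this line of work is the Fourier Signature, which applies a row-wise one-dimensional DFT to a panoramic image and keeps the first coefficients of each row; their magnitudes are invariant to rotations of the robot on the ground plane, since such rotations appear as column shifts of the panorama. The following is a minimal sketch in Python/NumPy; the function names and the number of retained coefficients are illustrative, not the implementation used in the thesis:

        import numpy as np

        def fourier_signature(panorama, k=16):
            # Row-wise 1-D DFT of a grey-scale panoramic image (H x W array).
            # A rotation of the robot shifts the panorama columns, which only
            # changes the phase of the coefficients, so keeping the magnitudes
            # of the first k coefficients per row yields a compact,
            # rotation-invariant descriptor.
            rows = np.fft.fft(panorama.astype(float), axis=1)
            return np.abs(rows[:, :k])

        def descriptor_distance(d_a, d_b):
            # Euclidean distance between two descriptors of the same shape.
            return np.linalg.norm(d_a - d_b)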
    Given this predominance of grey-scale information, a comparison of different global-appearance techniques that exploit colour in different ways is carried out. This study covers the computational requirements and the precision of the pose estimation in a dense map, also simulating different noise and occlusion conditions in the test scenes.
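    By way of illustration, an evaluation protocol of this kind can be sketched as follows: each test scene is perturbed with noise or an occlusion and then localised against the dense map by nearest-neighbour search over descriptor distances. This is an assumed outline of such an experiment, not the exact procedure of the thesis:

        import numpy as np

        def add_gaussian_noise(img, sigma=10.0):
            # Simulated sensor noise on an 8-bit grey-scale image.
            noisy = img.astype(float) + np.random.normal(0.0, sigma, img.shape)
            return np.clip(noisy, 0, 255).astype(np.uint8)

        def add_occlusion(img, x0=0, y0=0, w=40, h=40):
            # Simulated occlusion: a black rectangle over part of the scene.
            out = img.copy()
            out[y0:y0 + h, x0:x0 + w] = 0
            return out

        def localise(test_img, map_descriptors, map_poses, describe):
            # Nearest-neighbour pose retrieval over the dense map: the pose
            # of the map image with the closest descriptor is returned.
            d = describe(test_img)
            dists = [np.linalg.norm(d - m) for m in map_descriptors]
            return map_poses[int(np.argmin(dists))]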
    Once the ability of these techniques to characterize and distinguish images has been proved, our intention is to use them to extract information about the relative position of two scenes captured close to each other. Specifically, three different situations are proposed in which global appearance is applied to visual navigation tasks.
    First, we use the information provided by projective images to select and order nodes distributed along the navigation area, making up the map of the environment. To achieve this, the system estimates the relative displacement between two consecutive images by means of a Multiscale Analysis, defining the scales as artificial zooms of the original scene. With this study, we aim to demonstrate not only that measures of the image displacement can be obtained, but also the applicability and performance of global appearance with non-omnidirectional images.
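    One possible reading of this Multiscale Analysis, sketched below under our own assumptions (the zoom range, the descriptor and all names are illustrative): each artificial zoom is a central crop of the first image rescaled back to the original size, and the zoom factor whose descriptor best matches the second image acts as a proxy for the forward displacement between the two captures.

        import numpy as np

        def artificial_zoom(img, factor):
            # Central crop covering 1/factor of the image, rescaled back to
            # the original size with nearest-neighbour sampling.
            h, w = img.shape
            ch, cw = int(h / factor), int(w / factor)
            y0, x0 = (h - ch) // 2, (w - cw) // 2
            crop = img[y0:y0 + ch, x0:x0 + cw]
            ys = np.arange(h) * ch // h
            xs = np.arange(w) * cw // w
            return crop[np.ix_(ys, xs)]

        def estimate_advance(img_a, img_b, describe,
                             factors=np.linspace(1.0, 1.5, 11)):
            # Zoom factor of img_a whose descriptor best matches img_b;
            # a larger best factor suggests a larger forward advance.
            desc_b = describe(img_b)
            dists = [np.linalg.norm(describe(artificial_zoom(img_a, f)) - desc_b)
                     for f in factors]
            return factors[int(np.argmin(dists))]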
    The objective of the second study is to adapt the Multiscale Analysis to omnidirectional scenes. Combining omnidirectional information and the Multiscale Analysis, a visual odometry system is proposed, based on the global appearance of the scenes. Afterwards, this topological visual odometry is used in visual path estimation, also considering loop closures to improve the initial estimations.
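    As an illustration of how loop closures can refine such estimations, the following sketch integrates the relative advances along a deliberately simplified one-dimensional topological path and, when two poses are recognised as the same place, spreads the accumulated residual evenly over the intermediate steps. The correction scheme is our own assumption, not the method of the thesis:

        def integrate_path(advances):
            # Accumulate relative topological advances into positions.
            positions = [0.0]
            for a in advances:
                positions.append(positions[-1] + a)
            return positions

        def close_loop(positions, i, j, measured):
            # Poses i and j have been recognised as the same place, so their
            # separation should equal `measured` (e.g. 0 for a revisit).
            # Spread the residual linearly over the steps between them and
            # shift every later pose by the full residual.
            residual = (positions[j] - positions[i]) - measured
            n = j - i
            corrected = list(positions)
            for k in range(i + 1, len(positions)):
                corrected[k] -= residual * min(k - i, n) / n
            return corrected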
    Finally, we address the problem of vertical displacement estimation between scenes using the visual global appearance. The increasing interest in Unmanned Aerial Vehicles (UAVs) as navigation platforms, combined with visual sensors, encourages us to study the application of omnidirectional information to obtain a topological height estimator.
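    In its simplest topological form, such an estimator can be sketched as a retrieval over a set of reference omnidirectional images captured at known heights; the names and the protocol below are illustrative assumptions only:

        import numpy as np

        def estimate_height(test_img, ref_imgs, ref_heights, describe):
            # Topological height estimation: return the height label of the
            # reference omnidirectional image whose global-appearance
            # descriptor is closest to that of the test image.
            d = describe(test_img)
            dists = [np.linalg.norm(d - describe(r)) for r in ref_imgs]
            return ref_heights[int(np.argmin(dists))]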
    All the proposals are validated by means of experiments that use our own image database, captured in real environments.