Modeling Environments Hierarchically with Omnidirectional Imaging and Global-Appearance Descriptors
L. Payá, A. Peidró, F. Amorós, D. Valiente, O. Reinoso
Remote Sensing (2018), 10(4), 522. Ed. MDPI. ISSN: 2072-4292. DOI: 10.3390/rs10040522

Abstract:

In this work, a framework is proposed to build topological models in mobile robotics, using an omnidirectional vision sensor as the only source of information. The model is structured hierarchically into three layers, from a high-level layer that permits a coarse estimation of the robot's position to a low-level layer that refines this estimation efficiently. The algorithm relies on clustering approaches to obtain compact topological models in the high-level layers, combined with global-appearance techniques to represent the omnidirectional scenes robustly. Compared to the classical approaches based on the extraction and description of local features, global-appearance descriptors lead to models that can be interpreted and handled more intuitively. However, while local-feature techniques have been studied extensively in the literature, global-appearance techniques still need to be evaluated in detail to test their efficacy in map-building tasks. The proposed algorithms are tested with a set of publicly available panoramic images captured in realistic environments. The results show that global-appearance descriptors, along with some specific clustering algorithms, constitute a robust alternative for creating a hierarchical representation of the environment.
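The coarse-to-fine idea behind the hierarchy can be sketched in a few lines. The Python snippet below is a minimal illustration under stated assumptions, not the paper's implementation: a toy intensity histogram stands in for the global-appearance descriptors the paper evaluates, plain k-means stands in for the clustering algorithms it compares, and the hierarchy is reduced to two layers (coarse cluster selection, then fine matching within the selected cluster) for brevity.

```python
import numpy as np

def global_appearance_descriptor(panorama, n_bins=64):
    """Toy global-appearance descriptor: a normalized intensity histogram
    of the whole panoramic image (a stand-in for gist/HOG-style descriptors)."""
    hist, _ = np.histogram(panorama, bins=n_bins, range=(0.0, 1.0))
    hist = hist.astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def build_hierarchical_model(descriptors, n_clusters=5, n_iter=50, seed=0):
    """High-level layer: group image descriptors with a plain k-means loop;
    each centroid summarizes one region of the environment."""
    rng = np.random.default_rng(seed)
    centroids = descriptors[rng.choice(len(descriptors), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Assign every descriptor to its nearest centroid.
        labels = np.argmin(
            np.linalg.norm(descriptors[:, None] - centroids[None], axis=2), axis=1)
        # Move each centroid to the mean of its assigned descriptors.
        for k in range(n_clusters):
            if np.any(labels == k):
                centroids[k] = descriptors[labels == k].mean(axis=0)
    return centroids, labels

def localize(query, centroids, labels, descriptors):
    """Coarse-to-fine localization: pick the nearest centroid (coarse step),
    then compare only against that cluster's images (fine step)."""
    coarse = np.argmin(np.linalg.norm(centroids - query, axis=1))
    members = np.flatnonzero(labels == coarse)
    fine = members[np.argmin(np.linalg.norm(descriptors[members] - query, axis=1))]
    return coarse, fine

# Example with synthetic panoramas (random arrays standing in for images).
imgs = np.random.default_rng(1).random((100, 32, 128))
desc = np.stack([global_appearance_descriptor(im) for im in imgs])
centroids, labels = build_hierarchical_model(desc)
print(localize(global_appearance_descriptor(imgs[42]), centroids, labels, desc))
```

The fine step only searches inside one cluster, which is the practical payoff of the hierarchy: the coarse layer keeps the per-query comparison count roughly proportional to the cluster size rather than to the full map size.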