Fusing Omnidirectional Visual Data for Probability Matching Prediction
David Valiente, Luis Payá, Luis M. Jiménez, Jose M. Sebastián, Oscar Reinoso
ACIVS: International Conference on Advanced Concepts for Intelligent Vision Systems (Poitiers, France, 24-27 September 2018)
Springer, vol. 11182, pp. 571-583. ISBN: 978-3-030-01448-3, ISSN: 0302-9743, DOI: https://doi.org/10.1007/978-3-030-01449-0


This work presents an approach to visual data fusion with omnidirectional imaging in the field of mobile robotics. An inference framework is established through Gaussian processes (GPs) and information gain metrics in order to fuse visual data between poses of the robot. This framework produces a probability distribution of feature-matching existence in the 3D global reference system. Combined with a filter-based prediction scheme, this strategy allows us to propose an improved probability-oriented feature matching, in which the probability distribution is projected onto the image in order to predict relevant areas where matches are more likely to appear. The approach is shown to improve standard matching techniques, since the information gain and probability encodings confer adaptability to changing visual conditions. Consequently, the output data can feed a reliable visual localization application. Real experiments have been conducted with a publicly available dataset in order to confirm the validity and robustness of the contributions. Comparisons with a standard matching technique are also presented.
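To illustrate the core idea described in the abstract, the following is a minimal sketch of GP regression over image coordinates: past match locations serve as training data, and the predictive mean gives a probability-like score of match existence at query pixels, while the predictive variance can act as an information-gain proxy for selecting relevant search areas. All function names, the squared-exponential kernel choice, and the parameter values here are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def rbf_kernel(A, B, length=30.0, sigma=1.0):
    # Squared-exponential kernel between two sets of pixel coordinates
    # (assumed kernel choice; the paper does not specify one here).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma ** 2 * np.exp(-0.5 * d2 / length ** 2)

def gp_match_probability(train_xy, train_match, query_xy, noise=1e-2):
    """Hypothetical sketch: GP regression predicting a match-existence
    score (mean) and an uncertainty/information-gain proxy (variance)
    at each query pixel, from previously observed match locations."""
    K = rbf_kernel(train_xy, train_xy) + noise * np.eye(len(train_xy))
    Ks = rbf_kernel(query_xy, train_xy)
    # Predictive mean: Ks K^-1 y
    alpha = np.linalg.solve(K, train_match)
    mean = Ks @ alpha
    # Predictive variance: diag(Kss) - diag(Ks K^-1 Ks^T)
    v = np.linalg.solve(K, Ks.T)
    var = rbf_kernel(query_xy, query_xy).diagonal() - np.einsum('ij,ji->i', Ks, v)
    return mean, var

# Example: one confirmed match near (10, 10), one rejected candidate at (50, 50).
train_xy = np.array([[10.0, 10.0], [50.0, 50.0]])
train_match = np.array([1.0, 0.0])
query = np.array([[10.0, 10.0], [300.0, 300.0]])
mean, var = gp_match_probability(train_xy, train_match, query)
# Near the confirmed match the score is high and uncertainty low;
# far away the score drops and uncertainty grows, so matching effort
# can be restricted to high-score, low-uncertainty image regions.
```

Thresholding `mean` over a dense pixel grid would yield the predicted regions where feature matching is attempted, which is the spirit of the probability-oriented matching proposed in the paper.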