In recent years, the number of robots performing tasks autonomously across many fields and sectors has steadily increased. Today, robots can be found carrying out repetitive tasks in controlled environments and tackling complex, sometimes dangerous, operations. However, having robots work in uncontrolled environments, shared with objects and moving elements such as people and other robots, and requiring them to move between different points in the scene poses notable challenges that must be addressed before robots can be integrated more widely into such scenarios.
This research project tackles activities within this scope along three specific lines: navigation, recognition, and manipulation, in order to advance the integration of robots and the performance of tasks in these environments. First, the presence of humans in these social environments must be considered, since their possible movements and behavior affect how robots should move and, ultimately, navigate within these scenarios. Second, progress is needed in environment recognition, identifying and modeling the scenarios so that robot localization within them becomes more robust and precise. Finally, the project addresses the manipulation of objects by these robots, considering both flexibility in shape and the deformability of the objects involved.
Project PROMETEO 075/2021 is funded by the Consellería de Innovación, Universidades, Ciencia y Sociedad Digital de la Generalitat Valenciana.


Navigation Line
A.1 Incremental creation of visual topological models (see the sketch after this list).
A.2 Creation of hierarchical hybrid models from the information captured by an omnidirectional vision system and a 3D LiDAR.
A.3 Solving the localization problem using the created models, estimating an associated uncertainty from previous estimates and from the reliability of the information captured by the sensors.
A.4 Localization of robots in outdoor environments using georeferenced images, without prior exploration, and utilizing LiDAR and cameras.
A.5 Automatic calibration of camera-LiDAR sensor pairs in outdoor and adverse conditions.
A.6 Mission planner to determine the trajectories and actions that the mobile robotic platform with a manipulator must carry out.
A.7 Extension of the mission planner.
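To give a flavour of activity A.1, the sketch below shows one common way to build a visual topological model incrementally: each image is summarized by a global descriptor, and a new node is created whenever no stored node is sufficiently similar. This is a minimal illustration only; the descriptor source, the distance threshold, and the map structure are assumptions made for this example, not the project's actual design.

```python
import numpy as np

class TopologicalMap:
    """Minimal incremental topological map: each node stores a global image
    descriptor; an edge links every new node to the node it was created from."""

    def __init__(self, new_node_threshold=0.5):
        self.descriptors = []                 # one global descriptor per node
        self.edges = []                       # (node_i, node_j) adjacency pairs
        self.threshold = new_node_threshold   # assumed, uncalibrated value

    def update(self, descriptor, current_node=None):
        """Match the incoming descriptor against all stored nodes; if none is
        close enough, create a new node connected to the current one."""
        if self.descriptors:
            dists = [np.linalg.norm(descriptor - d) for d in self.descriptors]
            nearest = int(np.argmin(dists))
            if dists[nearest] < self.threshold:
                return nearest                # revisiting a known place
        # the scene looks new: add a node and link it to where we came from
        self.descriptors.append(descriptor)
        new_node = len(self.descriptors) - 1
        if current_node is not None:
            self.edges.append((current_node, new_node))
        return new_node
```

In practice the descriptor would come from, for instance, the omnidirectional vision system mentioned in A.2, and the same matching distances can feed the uncertainty-aware localization of A.3.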
Recognition Line
A.8 Segmentation of images into meaningful regions in indoor and outdoor scenes.
A.9 Scene interpretation from the fusion of images and laser data.
A.10 Detection of stable zones in images under different lighting conditions using deep learning.
A.11 Estimation and modeling of object trajectories using visual and three-dimensional information.
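To make activity A.11 concrete, the following sketch performs one predict/update step of a constant-velocity Kalman filter, a standard baseline for estimating object trajectories from per-frame detections. The state layout and the noise values (dt, q, r) are illustrative assumptions, not parameters of the project.

```python
import numpy as np

def kalman_cv_step(x, P, z, dt=0.1, q=1e-2, r=1e-1):
    """One step of a constant-velocity Kalman filter.
    State x = [px, py, vx, vy]; measurement z = detected (px, py)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)   # constant-velocity motion model
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)    # only position is observed
    Q = q * np.eye(4)                            # process noise (assumed)
    R = r * np.eye(2)                            # measurement noise (assumed)
    # predict where the object should be now
    x = F @ x
    P = F @ P @ F.T + Q
    # correct the prediction with the new detection
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Running this step once per frame over detections obtained from images (and, analogously, over 3D positions from laser data) yields a smoothed trajectory and a velocity estimate for each tracked object.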
Manipulation Line
A.12 Calculation of contact points from multiple views.
A.13 Detection of slippage/movement of in-hand objects (see the sketch after this list).
A.14 Tactile control in manipulation tasks.
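As one simple baseline for the slip-detection problem of A.13, the snippet below flags slippage when consecutive tactile frames change faster than a threshold. The tactile frame format (a normalized pressure image) and the threshold value are assumptions made for illustration only.

```python
import numpy as np

def detect_slip(prev_frame, frame, change_threshold=0.15):
    """Return (slip_flag, change) for two consecutive tactile frames.
    Frames are 2D arrays of normalized pressure values in [0, 1]; a large
    mean absolute change between frames suggests the grasped object is
    sliding across the sensor surface."""
    change = float(np.mean(np.abs(frame - prev_frame)))
    return change > change_threshold, change
```

A detector of this kind could run inside the tactile control loop of A.14, triggering a grip-force adjustment whenever slippage is flagged.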
Global Activities
AG.1 Experimentation and validation tests.
AG.2 Coordination, management and dissemination of results.