Automation, Robotics and Computer Vision Laboratory (ARVC)

David Valiente García


Assistant Professor & Researcher

Automation, Robotics and Computer Vision Lab.

Miguel Hernández University

Address:

Miguel Hernández University
Systems Engineering and Automation Department
INNOVA Building
Avda. Universidad s/n
03202 - Elche (Alicante) SPAIN


Email: dvaliente@umh.es

My Projects

Research Projects

AViRobots

Title: Development of an intelligent surveillance and security infrastructure system based on mobile robots

Funded by: AVI (Agència Valenciana de la Innovació)

Duration: 01/2023 - 12/2025

Description: The project focuses on the use of terrestrial mobile robots for the surveillance of indoor and outdoor environments, access control and people identification. It proposes technological developments that digitize and automate the surveillance of buildings and infrastructures by means of mobile robots aided by artificial intelligence techniques. The project considers the development of a complete surveillance system that integrates a set of intelligent mobile robots equipped with sensors, a human-machine interface software system that allows efficient interaction between operators and robots, and a wireless communications system that allows the exchange of information within the system.
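As a rough illustration of how such robots might report to the human-machine interface over the wireless link, the following sketch defines a simple status message. The message fields and the RobotStatus/Alarm names are illustrative assumptions, not the project's actual protocol.

```python
# Minimal sketch of a robot-to-operator status message for a surveillance
# system like the one described above. Field names are illustrative
# assumptions, not the project's actual protocol.
import json
import time
from dataclasses import dataclass, field, asdict


@dataclass
class Alarm:
    kind: str          # e.g. "intrusion", "overheating"
    confidence: float  # detector confidence in [0, 1]
    position: tuple    # (x, y) in map coordinates


@dataclass
class RobotStatus:
    robot_id: str
    pose: tuple                      # (x, y, theta) in the shared map frame
    battery: float                   # remaining charge in [0, 1]
    alarms: list = field(default_factory=list)
    timestamp: float = field(default_factory=time.time)

    def to_json(self) -> str:
        """Serialize the status so it can be sent over the wireless link."""
        return json.dumps(asdict(self))


# Example: a patrol robot reporting a possible intrusion to the operator UI.
status = RobotStatus(
    robot_id="patrol_01",
    pose=(12.4, 3.7, 1.57),
    battery=0.82,
    alarms=[Alarm(kind="intrusion", confidence=0.91, position=(14.0, 5.2))],
)
print(status.to_json())
```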

The resulting system can be exploited by security companies for the surveillance of indoor or outdoor environments, or by law enforcement agencies. During the project, a demonstration system will be created to validate this application and bring it close to market readiness, reducing uncertainties about the technical and commercial viability of this technology. The demonstrators will make it possible to test the operation of the surveillance system under real operating conditions and to present the product to companies interested in its commercial exploitation.

This project with reference INNVA1/2023/61 has been funded by the Valencian Innovation Agency.

Keywords: Mobile robots, visual perception, multisensory fusion, infrastructure surveillance

Head Researcher: Arturo Gil, Luis Payá



HYREBOT

Title: Hybrid Robots and Multisensory Reconstruction for Applications in Lattice Structures (HyReBot)

Funded by: Ministerio de Ciencia e Innovación

Duration: 09/2021 - 08/2024

Description: The use of reticular structures, composed of a number of closely intertwined beams or bars, is widespread nowadays in the construction of all types of fastening and support components for different infrastructures. They are especially common in metal bridges, but also in the roofs of hangars and large industrial buildings. They are generally formed by a set of highly interlinked and interconnected bars, joined together by nodes (either rigid or articulated) and forming a three-dimensional structural mesh. Inspection and maintenance tasks on this type of reticular structure are especially challenging owing to (a) the access problems caused by the dense interconnection of the bars through the nodes and (b) the complexity of finding paths that allow moving from a starting point to a target point while traversing these structural nodes.

Aerial vehicles have been considered over the past few years as a possible solution to automate these inspection and maintenance tasks on three-dimensional reticular structures. However, the high complexity of such structures (often including narrow gaps between nodes and bars, with a strongly heterogeneous distribution) limits the use of aerial vehicles, since they cannot reach the internal locations of the structure that are not easily accessible. Another limitation of these vehicles is their reduced manipulation capacity while airborne.

The present research project focuses on this field. It explores the possibility of using robotic units that can move along these reticular structures, navigating through them with 6 degrees of freedom and traversing the reticular nodes regardless of their arrangement, layout and the 3D configuration of the mesh. To address these inspection and/or maintenance tasks, this research project proposes the analysis, design and implementation of hybrid robots. They will consist of simple modules with few degrees of freedom, either with a serial or a parallel structure, designed in such a way that, when combined into hybrid robots, they can effectively navigate through these reticular structures despite all the challenges they present. In addition to analyzing these robots in depth from both the kinematic and the dynamic point of view, we propose to analyze and demonstrate their ability to navigate through such reticular workspaces, negotiating any possible arrangement of reticular nodes present in such structures.
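To give a flavour of the kinematic analysis of such modular chains, the sketch below composes the pose of a chain of simple 1-DOF modules with homogeneous transformations. The module geometry and joint layout are illustrative assumptions, not the HyReBot design.

```python
# Minimal sketch: composing the pose of a hybrid robot built from simple
# 1-DOF modules using homogeneous transformations.
import numpy as np


def rot_z(theta: float) -> np.ndarray:
    """4x4 homogeneous rotation about the module's local z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])


def trans_x(d: float) -> np.ndarray:
    """4x4 homogeneous translation along the module's local x axis."""
    T = np.eye(4)
    T[0, 3] = d
    return T


def forward_kinematics(joint_angles, link_length=0.2) -> np.ndarray:
    """Pose of the last module with respect to the base, chaining
    one revolute joint plus one fixed link per module (assumed layout)."""
    T = np.eye(4)
    for theta in joint_angles:
        T = T @ rot_z(theta) @ trans_x(link_length)
    return T


# Example: a 3-module chain bending around a structural node.
print(forward_kinematics([0.0, np.pi / 4, -np.pi / 6]))
```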

Finally, it is essential to have a sufficiently precise model of the reticular structure in which these modular robots have to operate and to estimate their position and orientation in this environment efficiently. Building on the experience of the members of the research team in previous projects, the present project also proposes reconstructing these environments (three-dimensional lattice structures) based on the fusion of the information provided by both range and visual sensors over a 360° field of perception around the robot. To achieve this objective, deep learning techniques will be used to efficiently process the large amount of data provided by the sensors.
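One simple way to fuse range and omnidirectional visual data is to project the 3D range points into an equirectangular panorama and attach an appearance value to each point. The sketch below assumes a single 360° camera co-located with the range sensor; this sensor model and the function names are illustrative assumptions, not the project's actual setup.

```python
# Minimal sketch of range-vision fusion: projecting 3D range points into an
# equirectangular panoramic image to attach an appearance value to each point.
import numpy as np


def project_to_panorama(points: np.ndarray, width: int, height: int) -> np.ndarray:
    """Map Nx3 points (x, y, z) in the sensor frame to (u, v) pixel
    coordinates of an equirectangular panorama of size width x height."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                       # [-pi, pi] around the robot
    elevation = np.arcsin(np.clip(z / r, -1.0, 1.0))
    u = (azimuth / (2 * np.pi) + 0.5) * (width - 1)
    v = (0.5 - elevation / np.pi) * (height - 1)
    return np.stack([u, v], axis=1)


def colorize_points(points: np.ndarray, panorama: np.ndarray) -> np.ndarray:
    """Attach the panorama pixel value to every range point (nearest pixel)."""
    h, w = panorama.shape[:2]
    uv = np.round(project_to_panorama(points, w, h)).astype(int)
    uv[:, 0] = np.clip(uv[:, 0], 0, w - 1)
    uv[:, 1] = np.clip(uv[:, 1], 0, h - 1)
    return panorama[uv[:, 1], uv[:, 0]]


# Example with synthetic data: 1000 range points and a grayscale panorama.
pts = np.random.randn(1000, 3)
pano = np.random.rand(256, 512)
print(colorize_points(pts, pano).shape)   # (1000,)
```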


Project PID2020-116418RB-I00 funded by MCIN/AEI/10.13039/501100011033.

Keywords: Hybrid robots, visual perception, sensor fusion, reticular structures

Head Researcher: L. Payá, O. Reinoso



TED2021

Title: Development of intelligent mobile technologies to address security and surveillance tasks indoors and outdoors

Funded by: Agencia Estatal de Investigación. Ministerio de Ciencia e Innovación

Duration: 12/2022 - 11/2024

Description: This project proposes using mobile robots and machine learning technologies to carry out surveillance and security tasks in indoor and outdoor environments. The project is expected to generate scientific knowledge and technological developments that digitize and automate the surveillance of buildings, infrastructures and industrial facilities. Such developments have potential for technology transfer to security companies, State security forces and emergency units.

Currently, these tasks are carried out by specialized personnel, aided mainly by cameras located at fixed positions and CCTV systems. This project proposes performing this surveillance much more effectively, and more safely for these personnel, with the support of cooperating mobile robots that patrol the areas to be monitored and use different types of sensors (omnidirectional vision cameras, infrared cameras, laser range and proximity sensors) together with sensor fusion technologies to address two major problems: (a) robot navigation through the environment to be monitored, including building a model or map, localization and trajectory planning, and (b) interpretation of the environment, so that suspicious objects, intrusions by unauthorized personnel and other potentially dangerous situations, such as fire sources and overheating in facilities, can be detected. The project includes the creation of an intuitive graphical interface that allows the user to interact with the robots and the maps created, review the alarms that have been generated and influence the tasks carried out by the robots.
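As a toy example of one of the interpretation tasks mentioned above, the sketch below flags possible overheating in an infrared image with a simple threshold check. The temperature threshold and minimum hot area are illustrative assumptions, not the project's actual detection method (which is expected to rely on learning-based techniques).

```python
# Minimal sketch: flagging possible overheating in a thermal (infrared) image
# by thresholding its temperature values. Parameters are illustrative.
import numpy as np


def detect_overheating(thermal_image: np.ndarray,
                       temp_threshold: float = 70.0,
                       min_hot_pixels: int = 25) -> bool:
    """Return True if the thermal image (temperatures in degrees Celsius)
    contains a sufficiently large region above the threshold."""
    hot_mask = thermal_image > temp_threshold
    return int(hot_mask.sum()) >= min_hot_pixels


# Example: a synthetic 120x160 thermal frame with a small hot spot.
frame = np.full((120, 160), 25.0)
frame[40:50, 60:72] = 85.0            # simulated overheated component
print(detect_overheating(frame))      # True
```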

Both the cooperation among the robots themselves and the cooperation between a potential remote operator and the robots are critical to effective surveillance. This is a cutting-edge technological topic undergoing intense development in current international research. Other technologies involved in the project, such as object and person recognition, deep learning and autonomous robot navigation, are also among the most actively developed today. The proposing research group has a consolidated track record and extensive experience in the fields of mobile robotics, machine learning, image processing and sensor fusion.

Therefore, the proposed idea is framed within the field of the digital transition and seeks to improve and enhance technology for security and surveillance tasks in buildings, infrastructures and facilities. The main goal of the project is to improve the quality of the work of security employees and the competitiveness of security companies. In particular, the use of mobile robots is proposed in situations in which static security cameras are inappropriate or insufficient, or to support and assist existing security personnel. One use case is, for example, the surveillance of large areas of land under adverse conditions (cold, extreme heat). In addition, the mobile robots will be equipped with sensors that allow the detection of intrusions or security failures in low-light or no-light conditions. The proposal also aims to have a minimal ecological impact, as it will use highly efficient electric mobile robots.


This project has been funded by the Agencia Estatal de Investigación (Ministerio de Ciencia e Innovación).

Keywords: Mobile robot, computer vision, image processing, sensor fusion, robot navigation, deep learning

Head Researcher: A. Gil, L. Payá



ModRet

Title: Recognition and creation of models of lattice structures (ModRet)

Funded by: Universidad Miguel Hernández de Elche

Duration: 2 years

Description: The project focuses on the creation of models of lattice (reticular) structures. These structures are found in numerous constructions and require continuous maintenance. This maintenance can be automated by means of a mobile robot capable of moving along the structure. However, to address this task, the robot needs a model of the structure that allows it to know its position and to plan the trajectory and sequence of movements required to reach the target point. To create this model, the robot will gather information as it moves along the structure for the first time, using the sensors it is equipped with (mainly omnidirectional vision systems). Modelling this kind of structure presents several distinctive aspects compared with other environments, such as its symmetry and the presence of repetitive visual patterns, the wide variety of viewpoints from which it can be observed depending on the robot's trajectory, and the changes its appearance may undergo due to the repairs carried out by the robot. Considering these characteristics, the model will be given a hierarchical structure, with a high-level layer containing information about the topology of the structure, and one or several low-level layers with data about the bars and nodes, such as their shape, width, the planes that make up the bars and the topology of the nodes. Artificial intelligence and deep learning techniques will be used to describe the scenes and extract relevant information. These tools will separate the information about the surroundings of the structure and its conditions (such as the lighting conditions) from the information about the lattice that surrounds the robot (bars and nodes). Likewise, algorithms will be implemented to build the model of the environment incrementally, updating it as the robot advances and captures new information about the structure.
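To make the hierarchical model more concrete, the sketch below stores a high-level topological graph of structural nodes plus a low-level layer with geometric data for each bar, and updates it incrementally. Class and attribute names are illustrative assumptions, not the project's actual data structures.

```python
# Minimal sketch of a two-layer lattice model: a high-level topology graph
# plus a low-level layer with geometric data for each bar.
from dataclasses import dataclass, field


@dataclass
class Bar:
    node_a: str
    node_b: str
    length: float      # metres, estimated from sensor data
    width: float       # cross-section width in metres


@dataclass
class LatticeModel:
    # High-level layer: adjacency between structural nodes.
    topology: dict = field(default_factory=dict)   # node id -> set of node ids
    # Low-level layer: geometric description of each bar.
    bars: dict = field(default_factory=dict)       # (a, b) -> Bar

    def add_bar(self, bar: Bar) -> None:
        """Incrementally update the model as the robot observes a new bar."""
        self.topology.setdefault(bar.node_a, set()).add(bar.node_b)
        self.topology.setdefault(bar.node_b, set()).add(bar.node_a)
        self.bars[(bar.node_a, bar.node_b)] = bar

    def neighbours(self, node: str) -> set:
        """Nodes reachable from 'node' through a single bar."""
        return self.topology.get(node, set())


# Example: the robot observes two bars meeting at node "N1".
model = LatticeModel()
model.add_bar(Bar("N1", "N2", length=1.5, width=0.05))
model.add_bar(Bar("N1", "N3", length=2.0, width=0.05))
print(model.neighbours("N1"))   # {'N2', 'N3'}
```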

Head Researcher: L. Payá



NAVICOM

Title: Robotic Navigation in Dynamic Environments by means of Compact Maps with Global Appearance Visual Information

Funded by: CICYT Ministerio de Ciencia e Innovación

Duration: 01/09/2014 - 31/05/2017

Description: Carrying out a task with a team of mobile robots that move across an unknown environment is one of the open research lines with the greatest scope for development in the mid-term. Accomplishing this task has proven to require a highly detailed map of the environment that allows the robots to localize themselves as they execute a particular task. In recent years the proposing research team has worked, with remarkable results, in the field of SLAM (Simultaneous Localization and Mapping) with teams of mobile robots. The work has considered the use of robots equipped with cameras and the inclusion of the gathered visual information to build map models. So far, different kinds of maps have been built, including metric maps based on visual landmarks as well as topological maps based on global appearance information extracted from the images.
These maps have enabled the navigation of the robots as well as the performance of high-level tasks in the environment. Nonetheless, there is room for improvement in several areas related to the research carried out so far. One of the important current problems is the treatment of the visual information and its updating as the environment changes gradually. In addition, the maps should be created considering both the dynamic and the static parts of the environment (for example, when other mobile robots or people move within it), leading to more realistic models as well as strategies to update the maps as changes are detected. A different research line considers the creation of maps that simultaneously combine information about the topology of the environment with semantic and metric information, allowing a more effective localization of the robot in large environments and enabling hierarchical localization in these maps. The proposed research project tackles the aforementioned lines, developing dynamic visual maps that incorporate the semantic and topological structure of the environment as well as metric information when the robots perform trajectories with 6 degrees of freedom.
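The sketch below illustrates the idea of appearance-based localization: each map location is represented by a global descriptor of its image, and the robot localizes by finding the closest descriptor. The descriptor used here (a downsampled, normalized grayscale image) is an illustrative choice, not necessarily the one used in the project.

```python
# Minimal sketch of global appearance-based topological localization.
import numpy as np


def global_descriptor(image: np.ndarray, size: int = 16) -> np.ndarray:
    """Downsample a grayscale image by block averaging and normalize it."""
    h, w = image.shape
    bh, bw = h // size, w // size
    blocks = image[:bh * size, :bw * size].reshape(size, bh, size, bw)
    desc = blocks.mean(axis=(1, 3)).ravel()
    desc = desc - desc.mean()
    return desc / (np.linalg.norm(desc) + 1e-9)


def localize(query: np.ndarray, map_descriptors: np.ndarray) -> int:
    """Return the index of the map location whose descriptor is closest."""
    distances = np.linalg.norm(map_descriptors - global_descriptor(query), axis=1)
    return int(np.argmin(distances))


# Example: a map of 5 random "panoramas" and a query taken at location 3.
rng = np.random.default_rng(0)
map_images = [rng.random((128, 256)) for _ in range(5)]
map_desc = np.stack([global_descriptor(im) for im in map_images])
print(localize(map_images[3], map_desc))   # 3
```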

Keywords: Mobile Robots, Visual Maps, Topological and Compact Navigation, Visual SLAM

Head Researcher: A. Gil, O. Reinoso



Technical Assistance

Hacia la formación práctica ubicua y digital en robótica mediante laboratorios remotos

Title: Collaboration agreement for the development of the project "Hacia la formación práctica ubicua y digital en robótica mediante laboratorios remotos" (Towards ubiquitous and digital hands-on robotics training through remote laboratories)

Funded by: Centro de Inteligencia Digital de la Provincia de Alicante (CENID)

Duration: 6 months (April 2022 - October 2022)

Description: This project aims to develop a remote laboratory: a cyber-physical platform that allows students of technical degrees to connect to robots remotely and carry out laboratory practice sessions and experiments with those robots over the Internet. This will give students greater spatial and temporal flexibility, allowing them to do laboratory work ubiquitously, without being restricted to travelling to a physical laboratory and working only during the hours in which access to that laboratory is enabled. Students will connect to the real robots through a web server and, through an interface, will be able to command movements or experiments to be performed with the remote robots. The motion of the robots will be shown in real time through a webcam, and information about the results of the remote experiment will also be returned, captured by position, velocity and force sensors mounted on the real robot. The remote robots to be implemented for distance practice sessions will be parallel (closed kinematic chain) robots, since they offer richer behaviour than traditional serial kinematic chain robots when studied in control and robotics courses.
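As a rough illustration of the web interface involved, the sketch below exposes a small HTTP endpoint through which a student could send a joint command to a remote robot. The use of Flask, the route name, the JSON fields and the send_to_robot stub are illustrative assumptions, not the laboratory's actual implementation.

```python
# Minimal sketch of a remote-lab command endpoint (assumed Flask-based).
from flask import Flask, request, jsonify

app = Flask(__name__)


def send_to_robot(joint_targets):
    """Stub: forward the command to the real robot's controller."""
    print("Commanding joints:", joint_targets)
    return {"status": "accepted", "joints": joint_targets}


@app.route("/command", methods=["POST"])
def command_robot():
    """Receive a JSON command from the student's GUI and forward it."""
    payload = request.get_json(force=True)
    joints = payload.get("joints", [])
    if not joints:
        return jsonify({"error": "no joint targets given"}), 400
    return jsonify(send_to_robot(joints))


if __name__ == "__main__":
    # The GUI would POST e.g. {"joints": [0.1, -0.2]} to /command and show
    # the robot's motion through the webcam stream.
    app.run(port=5000)
```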

The project will cover the following four activities: 1) construction of two parallel robots with which students can carry out practice sessions and experiments over the Internet, 2) implementation of the web server that manages reservations and students' remote access to the robots, 3) programming of graphical user interfaces that allow students to command orders and experiments while observing the motion of the robot in real time through a webcam, and 4) design of educational practice sessions and experiments to be carried out with the help of the developed remote laboratory. The main expected result of this project is the realization of this remote laboratory, which will make it possible to carry out practice sessions with real robots remotely and ubiquitously, putting digital technologies at the service of teaching and learning.

Keywords: Parallel robot, remote laboratory, laboratory practice sessions, identification, control

Head Researcher: Adrián Peidró



abionica1.21T

Title: Use of algorithms for in-flight situational awareness by means of computer vision

Funded by: Abionica Solutions S.L.

Duration: 05/2021 - 11/2021

Head Researcher: A. Gil



IXION2

Title: Contract for carrying out the experimental development work forming part of the project submitted to the Plan Avanza2 call, entitled "SUPVERT: Autonomous Aerial Vehicle for the Inspection of Vertical Structures"

Funded by: IXION INDUSTRY AND AEROSPACE S.L.

Duration: 2014

Head Researcher: O. Reinoso



