Research Projects
Mobile robotics for the automatic surveillance of premises and identification of dangerous situations in challenging conditions using deep learning techniques
Title: Mobile robotics for surveillance using deep learning techniques Funded by: Agencia Estatal de Investigación - Ministerio de Ciencia, Innovación y Universidades
Duration: 01/09/2024 - 31/12/2027
Description: The project proposes the use of mobile robots for the automatic surveillance of premises, including indoor and outdoor areas, and for the identification of risk situations under challenging operating conditions, with the common philosophy of using AI tools to address these issues robustly. Currently, these tasks are carried out by specialized personnel with the help of cameras located in fixed positions. We propose to achieve much more effective surveillance through mobile robots that can patrol the areas to be monitored and use different types of sensors (visible-spectrum cameras, infrared cameras, laser range sensors, etc.) to detect both risk elements and risk situations. The results of this project are expected to be relevant both for security companies and for the State security forces and bodies.
To achieve this goal, the project is structured into two main objectives. The first consists of the development of algorithms for the navigation of mobile robots in large social environments and under challenging operating conditions. The second addresses perception and the integration of technologies for interpreting the environment and solving security and surveillance tasks with mobile robots. Both objectives are fully aligned with current research trends in mobile robotics and sensory perception, and the research team plans to contribute to the generation of knowledge in these areas.
In addition, the proposal provides solutions to specific problems within the thematic priority "Digital world, industry, space and defense", since it pursues the digitalization of tasks related to improving the security of venues, infrastructures, buildings and industrial facilities. The use of mobile robots gives the proposal great capacity and versatility to detect risk situations, but it also requires that the methods developed within the two objectives expressly take into account the specificities of this kind of application. Accordingly, the project addresses, among other tasks:
- (a) the creation of maps of extensive environments, indoors and outdoors, with semantic information useful for security and surveillance tasks, including the systematic exploration of a priori unknown environments;
- (b) the localization of the robot itself (both indoors and outdoors, including under adverse lighting conditions) and of the alarms or risk situations detected, based on the fusion of sensory information;
- (c) the planning of surveillance trajectories, carrying out sweeps of the assigned areas, paying special attention to critical points and to the boundaries of the enclosure, in collaboration with the human operator;
- (d) the interpretation of the environment from sensory data, with special attention to elements that are critical for security tasks and to detected risk situations, such as fire outbreaks or people falling.
In addition to the development of these algorithms, the work plan contemplates the integration and implementation of a common graphical interface that shows the created map, the estimated position of the robot and the alarms, intrusions or potentially dangerous elements detected, and that allows interaction with the operator, so that he or she can influence the task carried out by the mobile robot.
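As an illustration of task (d), the sketch below flags persons in a camera frame with an off-the-shelf, COCO-pretrained detector. It is only a minimal example under assumed choices (Faster R-CNN from torchvision, a hypothetical confidence threshold and alarm logic); it does not represent the detection pipeline developed in the project.

```python
# Minimal sketch: flag persons in a camera frame with a generic COCO-pretrained
# detector. This is NOT the project's detector; threshold and alarm logic are
# illustrative assumptions only.
import torch
from torchvision.models.detection import (fasterrcnn_resnet50_fpn,
                                          FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights)
model.eval()

PERSON_LABEL = 1          # COCO class id for "person"
SCORE_THRESHOLD = 0.7     # hypothetical confidence threshold

def detect_persons(frame):
    """frame: float tensor of shape (3, H, W) with values in [0, 1]."""
    with torch.no_grad():
        output = model([frame])[0]
    detections = []
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        if label.item() == PERSON_LABEL and score.item() >= SCORE_THRESHOLD:
            detections.append((box.tolist(), score.item()))
    return detections

# Example with a synthetic frame (a real system would use camera images).
dummy_frame = torch.rand(3, 480, 640)
alarms = detect_persons(dummy_frame)
print(f"{len(alarms)} potential intrusions detected")
```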
Grant funded by Agencia Estatal de Investigación - Ministerio de Ciencia, Innovación y Universidades and by the European Union.
This website is part of the project PID2023-149575OB-I00, funded by MICIU/AEI/10.13039/501100011033 and by FEDER, UE.
Keywords: Mobile robot, visual perception, point cloud, deep learning, sensor fusion, localization, mapping, environment interpretation
Head Researcher: A. Gil, L. Payá More Information...
AViRobots
Title: Development of an intelligent surveillance and security infrastructure system based on mobile robots Funded by: AVI (Agència Valenciana de la Innovació)
Duration: 01/2023 - 12/2025
Description: The project focuses on the use of terrestrial mobile robots for the surveillance of indoor and outdoor environments, access control and people identification. It proposes technological developments that digitize and automate the surveillance of buildings and infrastructures by means of mobile robots aided by artificial intelligence techniques. The project contemplates the development of a complete surveillance system that will integrate: a set of intelligent mobile robots equipped with sensors, a human-machine interface software system that will allow efficient interaction between operators and robots and, finally, a wireless communications system that will allow the exchange of information within the system.
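The wireless communications layer is not detailed in this summary; the following is only a minimal stand-in showing how status and alarm messages could be exchanged as JSON datagrams between a robot and an operator station. The host, port and message fields are illustrative assumptions, not the system actually developed.

```python
# Minimal stand-in for the robot-operator information exchange: JSON messages
# over UDP. The real project's communications system is not described here;
# host, port and message schema are illustrative assumptions.
import json
import socket

OPERATOR_ADDR = ("127.0.0.1", 9999)   # hypothetical operator station address

def send_robot_status(robot_id, position, alarm=None):
    """Send a status/alarm message from a robot to the operator station."""
    message = {"robot": robot_id, "position": position, "alarm": alarm}
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(json.dumps(message).encode("utf-8"), OPERATOR_ADDR)

def receive_messages(timeout_s=1.0):
    """Operator side: collect incoming robot messages for a short interval."""
    received = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(OPERATOR_ADDR)
        sock.settimeout(timeout_s)
        try:
            while True:
                data, _ = sock.recvfrom(4096)
                received.append(json.loads(data.decode("utf-8")))
        except socket.timeout:
            pass
    return received

# Example: a robot reports its pose and a detected intrusion.
# send_robot_status("robot_1", [12.3, 4.5, 0.0], alarm="intrusion")
```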
The developed system can be exploited by security companies for the surveillance of indoor or outdoor environments, or by law enforcement agencies. During the course of the project, a demonstration system will be created to validate this application and bring it close to market readiness, thus reducing uncertainties about the technical and commercial viability of the technology. The demonstrators will make it possible to test the operation of the surveillance system under real operating conditions and to present the product to companies interested in its commercial exploitation.
This project with reference INNVA1/2023/61 has been funded by the Valencian Innovation Agency.
Keywords: Mobile robots, visual perception, multisensory fusion, infrastructure surveillance
Head Researcher: Arturo Gil, Luis Payá More Information...
HyReBot
Title: Hybrid Robots and Multisensory Reconstruction for Applications in Lattice Structures (HyReBot) Funded by: Ministerio de Ciencia e Innovación
Duration: 09/2021 - 08/2024
Description: Reticular (lattice) structures, composed of a number of closely intertwined beams or bars, are nowadays widespread in the construction of all types of fastening and support components for different infrastructures. They are especially common in metal bridges, but also in the roofs of hangars and spacious industrial buildings. They are generally formed by a set of highly interlinked and interconnected bars, joined together by nodes (either rigid or articulated), forming a three-dimensional structural mesh. The execution of both inspection and maintenance tasks on this type of reticular structure is especially challenging owing to (a) the access problems caused by the high interconnection of the bars through the nodes and (b) the complexity of finding paths that lead from a starting point to a target point while traversing these structural nodes.
Aerial vehicles have been considered over the past few years as a possible solution to automate these inspection and maintenance tasks on reticular three-dimensional structures. However, the high complexity of such structures (often including narrow gaps between nodes and bars, with a strongly heterogeneous distribution) limits the use of this type of aerial vehicle, since it cannot reach the internal locations of the structure that are not easily accessible. Another limitation of these vehicles is their reduced manipulation capacity while in the air.
The present research project focuses on this field. The project will explore the possibility of using robotic units that can move along these reticular structures, navigating through them with 6 degrees of freedom and traversing the reticular nodes present in them, regardless of the arrangement, layout and 3D configuration of the mesh. To address these inspection and/or maintenance tasks, this research project proposes the analysis, design and implementation of hybrid robots. They will consist of simple modules with few degrees of freedom, either with serial or parallel structure, designed in such a way that, when combined into hybrid robots, they can effectively navigate through these reticular structures despite the challenges they present. In addition to analyzing these robots in depth, from both the kinematic and the dynamic point of view, we propose to analyze and demonstrate their ability to navigate through such reticular workspaces, negotiating any possible arrangement of reticular nodes present in such structures.
Finally, it is essential to have a sufficiently precise model of the reticular structure in which these modular robots have to operate, and to estimate their position and orientation in this environment efficiently. Building on the experience of the members of the research team in previous projects, the present project also proposes the reconstruction of these environments (three-dimensional lattice structures) based on the fusion of the information provided by both range and visual sensors over a 360° field of perception around the robot. To achieve this objective, deep learning techniques will be used to efficiently process the large amount of data provided by the sensors.
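As a simple illustration of range-vision fusion, the sketch below projects 3D range points into a camera image using a pinhole model and attaches the corresponding pixel colors. The calibration values and the random data are placeholders, and the deep-learning-based processing mentioned above is not represented here.

```python
# Minimal sketch of range-camera fusion: project 3D points into an image with a
# pinhole model and attach pixel colors. Calibration values and the random data
# are placeholders; the project's deep learning pipeline is not shown.
import numpy as np

def colorize_points(points, image, K, R, t):
    """points: (N, 3) in the range-sensor frame; image: (H, W, 3) uint8;
    K: (3, 3) camera intrinsics; R, t: rotation and translation to camera frame."""
    cam = points @ R.T + t                  # transform to camera frame
    in_front = cam[:, 2] > 0.1              # keep points in front of the camera
    cam = cam[in_front]
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]             # perspective division
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = image[v[valid], u[valid]]      # sample pixel colors
    return cam[valid], colors

# Placeholder data standing in for a real scan and camera frame.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
scan = np.random.uniform([-2, -2, 0.5], [2, 2, 5.0], size=(1000, 3))
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
colored_xyz, colored_rgb = colorize_points(scan, frame, K, R, t)
print(colored_xyz.shape, colored_rgb.shape)
```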
Project PID2020-116418RB-I00 funded by MCIN/AEI/10.13039/501100011033.
Keywords: Hybrid robots, visual perception, sensor fusion, reticular structures
Head Researcher: L. Payá, O. Reinoso More Information...
PROMETEO2021
Title: Towards Further Integration of Intelligent Robots in Society: Navigate, Recognize and Manipulate Funded by: GENERALITAT VALENCIANA
Duration: 01/2021 - 12/2024
Description:
In recent years, the number of robots used to perform tasks autonomously in multiple fields and sectors has gradually increased. Today, we can find robots performing repetitive tasks in controlled environments and addressing complex, sometimes dangerous, tasks. However, having robots carry out tasks in uncontrolled environments, with the presence of moving objects and elements (such as people and other robots) and the need to move between different points in the scene, presents notable challenges that must be addressed to enable greater integration of robots in such scenarios.
This research project aims to tackle activities within this scope along three specific lines: navigation, recognition and manipulation, in order to advance the integration of robots and the performance of tasks in these environments. On the one hand, it is necessary to consider the presence of humans in these social environments, as their possible movements and behavior will affect how robots should move and, ultimately, navigate within these scenarios. Additionally, there is a need to advance in environment recognition, identifying the scenarios to make the localization of robots within them more robust and precise. Finally, the problem of object manipulation by these robots will be addressed, considering both the flexibility in shape and the deformability of these objects.
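One common way to let detected people influence robot motion is to add a proximity penalty to the cost map used for planning. The sketch below is a toy version of this idea with hypothetical grid size and penalty parameters; it is not the navigation method developed in the project.

```python
# Minimal sketch of a "social" cost map: detected people add a Gaussian
# proximity penalty to a base occupancy cost grid, so that a planner prefers
# paths that keep a comfortable distance. Grid size, resolution and the
# penalty parameters are illustrative assumptions, not the project's method.
import numpy as np

def social_cost_map(base_cost, people_cells, sigma_cells=5.0, weight=10.0):
    """base_cost: (H, W) array; people_cells: list of (row, col) positions."""
    rows, cols = np.indices(base_cost.shape)
    cost = base_cost.astype(float).copy()
    for (pr, pc) in people_cells:
        d2 = (rows - pr) ** 2 + (cols - pc) ** 2
        cost += weight * np.exp(-d2 / (2.0 * sigma_cells ** 2))
    return cost

# Example: an empty 50x50 grid with two detected persons.
grid = np.zeros((50, 50))
cost = social_cost_map(grid, people_cells=[(10, 10), (30, 40)])
print(cost.max())  # highest penalty near the persons' positions
```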
Project PROMETEO 075/2021 is funded by the Consellería de Innovación, Universidades, Ciencia y Sociedad Digital de la Generalitat Valenciana.
Head Researcher: Oscar Reinoso More Information...
TED2021
Title: Development of intelligent mobile technologies to address security and surveillance tasks indoors and outdoors Funded by: Agencia Estatal de Investigación. Ministerio de Ciencia e Innovación
Duration: 12/2022 - 11/2024
Description: This project proposes using mobile robots and machine learning technologies to carry out surveillance and security tasks in indoor and outdoor environments. During the course of the project, scientific knowledge is expected to be generated and technological developments carried out that digitize and automate the surveillance of buildings, infrastructures and industrial facilities. Such developments are expected to have potential for technology transfer to security companies, State security forces and emergency units.
Currently, these tasks are carried out by specialized personnel, mainly with the aid of cameras located in fixed positions and CCTV systems. This project proposes to perform this surveillance much more effectively, and more safely for these personnel, with the support of cooperating mobile robots that can patrol the areas to be monitored and use different types of sensors (omnidirectional vision cameras, infrared cameras, laser range and proximity sensors) and sensor fusion technologies to address two major problems: (a) robot navigation through the environment to be monitored, including building a model or map, localization and trajectory planning, and (b) interpretation of the environment, so that suspicious objects, intrusions by unauthorized personnel and other potentially dangerous situations, such as fire sources and overheating in facilities, can be detected. The project includes the creation of an intuitive graphical interface that allows the user to interact with the robots and the maps created, to be aware of the alarms generated and to influence the task carried out by the robots.
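As a didactic stand-in for problem (a), the following sketch plans a path on an occupancy grid with a breadth-first search over a 4-connected grid. The map and endpoints are placeholders; the planner actually developed in the project is not represented here.

```python
# Minimal illustration of trajectory planning on an occupancy grid: breadth-
# first search on a 4-connected grid. This is a didactic stand-in, not the
# planner developed in the project; the grid and endpoints are placeholders.
from collections import deque

def plan_path(grid, start, goal):
    """grid: list of lists, 0 = free, 1 = occupied; start/goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:           # reconstruct path back to start
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                                # no free path exists

# Example on a small map with one wall.
occupancy = [[0, 0, 0, 0],
             [1, 1, 1, 0],
             [0, 0, 0, 0]]
print(plan_path(occupancy, start=(0, 0), goal=(2, 0)))
```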
Both the cooperation among the robots themselves and the cooperation between a potential remote operator and the robots are critical to effective surveillance; this is a cutting-edge technological aspect under intense development in current international research. Other technologies involved in the project, such as object and person recognition, deep learning and autonomous robot navigation, are also among the most actively developed today. The proposing research group has a consolidated track record and extensive experience in the fields of mobile robotics, machine learning, image processing and sensor fusion.
Therefore, the proposed idea is framed within the field of digital transition and seeks to improve and enhance technology so as to apply it to security and surveillance tasks in buildings, infrastructures and facilities. The main goal of the project is to improve the quality of the work of security employees and the competitiveness of security companies. In particular, the use of mobile robots is proposed in situations in which static security cameras are inappropriate or insufficient, or as support and assistance to existing security personnel. One use case would be, for example, the surveillance of large areas of land under adverse conditions (cold, extreme heat). In addition, the mobile robots will be equipped with sensors that allow the detection of intrusions or security failures in low or no lighting conditions. The proposal also aims to have a minimal ecological impact, as it will use highly efficient electric mobile robots.
This project has been funded by the Agencia Estatal de Investigación. Ministerio de Ciencia e Innovación.
Keywords: Mobile robot, computer vision, image processing, sensor fusion, robot navigation, deep learning
Head Researcher: A. Gil, L. Payá More Information...
IA-GAMMAPATIA
Title: Initial analysis of AI tools for predicting the malignant progression of monoclonal gammopathies of undetermined significance to multiple myeloma or other lymphoproliferative diseases Funded by: Generalitat Valenciana
Duration: 01/2023 - 12/2023
Description: Patients with monoclonal gammopathy of undetermined significance are at risk of progressing to multiple myeloma. Although classifications based on the risk of evolution to cancer are known, lifelong medical check-ups must be carried out to detect the evolution of these gammopathies towards malignancy. The existing data will be explored and an initial analysis of the performance of several AI tools will be carried out, in order to establish the predictive capacity of each of them.
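A typical way to compare the predictive capacity of several tools on tabular clinical data is cross-validated AUC. The sketch below uses synthetic data and two generic scikit-learn models as stand-ins; the real clinical variables and the tools finally evaluated in the project are not represented.

```python
# Minimal sketch of comparing the predictive capacity of several models with
# cross-validated AUC. The data here are synthetic; the real clinical variables
# and the tools finally evaluated in the project are not represented.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a tabular dataset of patient features and progression.
X, y = make_classification(n_samples=300, n_features=12, weights=[0.85, 0.15],
                           random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f} (+/- {auc.std():.3f})")
```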
Project funded by the Consellería de Innovación, Universidades, Ciencia y Sociedad Digital de la Generalitat Valenciana.
Head Researcher: L. Payá More Information...
AICO2019
Title: Hierarchical model creation and robust localization of mobile robots in social environments Funded by: Generalitat Valenciana
Duration: 01/01/2019 - 31/12/2020
Description: The project focuses on the field of map building and localization using omnidirectional vision, advancing towards a hybrid topological-metric paradigm which allows (a) the incremental construction of a semantic map as the robot explores the unknown environment and (b) the precise estimation of the robot's position and orientation, with 6 degrees of freedom and at a reasonable computational cost. Additionally, to improve the integration of the mobile robot in real social environments, where it must interact with people, some features will be included in the model to make it compatible with human perception.
Thus, the proposal aims to go beyond the concept of multi-level hierarchical localization, adapting it to extensive and complex social environments, and including collaboration with users through high-level commands. This proposal is organized around two main lines of research:
- Line A: Incremental creation of hybrid metric-topological maps from the global appearance of a set of scenes.
- Line B: Construction of environment models that allow localization with 6 degrees of freedom from visual information.
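To give a feel for global-appearance localization (Line A above), the toy example below represents each map node by a compact descriptor built from a block-averaged panoramic image and matches a query image to the closest node. The descriptor, the images and the similarity measure are placeholders, not the methods studied in the project.

```python
# Toy illustration of global-appearance localization: each map node stores a
# compact descriptor of its panoramic image (block-averaged, normalized pixels)
# and a query image is matched to the closest node. The descriptor and the
# images here are placeholders, not the methods studied in the project.
import numpy as np

def global_descriptor(image, blocks=(8, 32)):
    """image: (H, W) grayscale array; returns a unit-norm descriptor vector."""
    h, w = image.shape
    bh, bw = h // blocks[0], w // blocks[1]
    small = image[:bh * blocks[0], :bw * blocks[1]] \
        .reshape(blocks[0], bh, blocks[1], bw).mean(axis=(1, 3))
    d = small.flatten()
    return d / (np.linalg.norm(d) + 1e-9)

def coarse_localize(query_image, map_descriptors):
    """Return the index of the most similar map node (cosine similarity)."""
    q = global_descriptor(query_image)
    sims = map_descriptors @ q
    return int(np.argmax(sims))

# Placeholder panoramas standing in for omnidirectional captures.
rng = np.random.default_rng(0)
panoramas = [rng.random((128, 512)) for _ in range(10)]
map_desc = np.stack([global_descriptor(p) for p in panoramas])
print(coarse_localize(panoramas[3], map_desc))   # expected: 3
```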
Keywords: Mobile robot; omnidirectional vision; hybrid map; hierarchical localization; social environments
Head Researcher: L. Payá More Information...
OMMNI-SLAM
Title: Map Building by Means of Appearance Visual Systems for Robot Navigation Funded by: CICYT Ministerio de Ciencia e Innovación
Duration: 01/01/2017 - 31/12/2019
Description: In order to be truly autonomous, a mobile robot should be capable of navigating through any kind of environment while carrying out a task. To do so, the robot must be able to create a model of its workspace that allows it to estimate its position within it and to navigate along a trajectory.
Map building and navigation are currently very active research areas, on which a large number of researchers focus and in which very different approaches have emerged, based on diverse algorithms and various kinds of sensory information. To date, most efforts have concentrated on the construction of models of the environment based on a set of significant points extracted from it, without considering the global appearance of the scene.
Considering the concepts posed above, we propose the improvement and development of new mechanisms that allow an efficient, robust and precise modelling of the environment by making use of omnidirectional vision systems. The research group has experience in these areas and, during the last years, has developed different approaches to map building, localization, exploration and SLAM by means of information gathered by different kinds of vision systems installed on the robots. To carry out these approaches, an extensive study of the different description methods has been performed, covering both methods based on the extraction of significant points and local descriptors and methods based on the global appearance of the image, with remarkable results.
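For the local-feature side of that comparison (significant points and local descriptors), the sketch below extracts ORB keypoints from two images and matches them by descriptor distance with OpenCV. The file names are placeholders, and the descriptors and matching strategies actually studied in the project are not reproduced here.

```python
# Minimal sketch of the local-feature approach (significant points): ORB
# keypoints are extracted from two images and matched by descriptor distance.
# The file names are placeholders; the descriptors and matching strategies
# studied in the project are not reproduced here.
import cv2

img1 = cv2.imread("omni_view_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
img2 = cv2.imread("omni_view_b.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
if img1 is None or img2 is None:
    raise SystemExit("Provide two grayscale images to run this example.")

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (suitable for binary descriptors).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative matches between the two views")
```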
Keywords: Mobile robots, autonomous navigation, computer vision, omnidirectional systems
Head Researcher: L. Payá, O. Reinoso More Information...
NAVICOM
Title: Robotic Navigation in Dynamic Environments by means of Compact Maps with Global Appearance Visual Information Funded by: CICYT Ministerio de Ciencia e Innovación
Duration: 01/09/2014 - 31/05/2017
Description: Carrying out a task with a team of mobile robots that move across an unknown environment is one of the open research lines with the greatest scope for development in the mid-term. To accomplish such a task, it has proved necessary to have a highly detailed map of the environment that allows the robots to localize themselves as they execute it. During the last years, the proposing research team has worked with remarkable results in the field of SLAM (Simultaneous Localization and Mapping) with teams of mobile robots. This work has considered the use of robots equipped with cameras and the inclusion of the visual information gathered in order to build map models. So far, different kinds of maps have been built, including metric maps based on visual landmarks as well as topological maps based on global appearance information extracted from images.
These maps have allowed the robots to navigate and to perform high-level tasks in the environment. Nonetheless, there is room for improvement in several areas related to the research carried out so far. Currently, one of the important problems lies in the treatment of the visual information and its updating as the environment changes gradually. In addition, the maps should be created considering the dynamic and static parts of the environment (for example, when other mobile robots or people move in it), thus leading to more realistic models, as well as strategies to update the maps as changes are detected. A different research line considers the creation of maps that simultaneously combine information about the topology of the environment with semantic and metric information, allowing a more effective localization of the robot in large environments and, in addition, enabling hierarchical localization in these maps. The proposed research project tackles the aforementioned lines, developing dynamic visual maps that incorporate the semantic and topological structure of the environment, as well as metric information, when the robots perform trajectories with 6 degrees of freedom.
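One simple way to keep a visual map consistent with a gradually changing environment is to track a persistence score per landmark: reinforce landmarks that are re-observed, decay those that are not, and prune the ones that fade out. The sketch below illustrates this idea with hypothetical parameters; it is not the updating strategy developed in the project.

```python
# Simple illustration of keeping a map consistent with a changing environment:
# each landmark carries a persistence score that grows when re-observed and
# decays otherwise; landmarks whose score drops too low are pruned. The update
# rule and parameters are illustrative assumptions, not the project's method.
def update_map(landmarks, observed_ids, gain=0.3, decay=0.9, prune_below=0.1):
    """landmarks: dict id -> persistence score in [0, 1]."""
    updated = {}
    for lid, score in landmarks.items():
        if lid in observed_ids:
            score = min(1.0, score + gain)    # reinforce re-observed landmarks
        else:
            score *= decay                    # gradually forget unseen ones
        if score >= prune_below:
            updated[lid] = score
    for lid in observed_ids - landmarks.keys():
        updated[lid] = gain                   # register newly seen landmarks
    return updated

# Example: landmark "door_3" disappears (e.g. furniture moved), "box_7" is new.
m = {"door_3": 0.8, "wall_1": 0.9}
for _ in range(20):
    m = update_map(m, observed_ids={"wall_1", "box_7"})
print(m)   # "door_3" has been pruned; "wall_1" and "box_7" remain
```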
Keywords: Mobile Robots, Visual Maps, Topological and Compact Navigation, Visual SLAM
Head Researcher: A. Gil, O. Reinoso More Information...
VISCOBOT II
Title: Integrated Exploration of Environments by means of Cooperative Robots in order to build 3D Visual and Topological Maps intended for 6 DOF Navigation Funded by: CICYT Ministerio de Ciencia e Innovación
Duration: 01/01/2011 - 31/12/2013
Description: While a group of mobile robots carries out a task, the robots need to find their location within the environment, and, in consequence, a precise map of a general, initially unknown environment has to be available to them. During the last decade, a series of methods have been developed that allow a mobile robot to construct such a map. These algorithms consider the case in which the vehicle moves through the environment and constructs the map while, simultaneously, computing its location within it; this problem has been named Simultaneous Localization and Mapping (SLAM). This research project thus focuses on the construction of visual maps of general, unknown 3D environments by a team of mobile robots equipped with vision sensors. In this sense, we propose to undertake, among others, the following lines: 6 DOF cooperative visual SLAM, in which the robots move along general trajectories in the environment (with 6 degrees of freedom) instead of the classical trajectories in which the robots are assumed to navigate on a two-dimensional plane; integrated exploration, where the exploration paths of the robots aim to maximize the knowledge of the environment while taking into account the uncertainty of the maps created by the robot(s); alignment and fusion of the local maps created by different robots; and, finally, the creation of maps using information based on visual appearance, which allows the construction of high-level topological maps.
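The map alignment line can be illustrated with the classical Kabsch procedure: given corresponding landmarks from two local maps, estimate the rigid transform that best aligns them. The sketch below assumes known correspondences and synthetic data, which is a strong simplification of the real alignment and fusion problem addressed in the project.

```python
# Illustration of map alignment: estimate the rigid transform (R, t) that best
# aligns corresponding 3D landmarks from two local maps (Kabsch algorithm).
# Known correspondences and synthetic data are assumed here, a simplification
# of the real alignment/fusion problem addressed in the project.
import numpy as np

def align_maps(points_a, points_b):
    """Find R, t such that R @ a + t approximates b for corresponding rows."""
    ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)
    H = (points_a - ca).T @ (points_b - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

# Example: map B is map A rotated 30 degrees about Z and shifted.
rng = np.random.default_rng(1)
A = rng.random((20, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
B = A @ R_true.T + np.array([2.0, -1.0, 0.5])
R_est, t_est = align_maps(A, B)
print(np.allclose(R_est, R_true), np.round(t_est, 3))
```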
Keywords: Robotics, Visual SLAM, Cooperative Exploration
Head Researcher: O. Reinoso More Information...
VISCOBOT: Perception
Title: Cooperative Mobile Visual Perception Systems as support for tasks performed by means of Robot Networks Funded by: CICYT Ministerio de Ciencia e Innovación
Duration: 1/10/2007 - 30/09/2010
Description: Performing tasks in a coordinated manner with a team of robots is a topic of great interest and improves the results compared to the single-robot case. The current research project focuses on this field and proposes the use of different vision systems, distributed throughout the mobile agent network, to gather a precise and complete description of the environment. To meet the proposed goals it is necessary to tackle different research lines; consequently, we are working on the following subjects: cooperative map building and localization using particle filters; visual landmark modelling to improve data association in visual SLAM; the development of cooperative exploration strategies using the information provided by each robot; and the cooperative reconstruction of environments using appearance-based methods.
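To hint at what particle-filter localization involves, the very reduced 1-D example below performs one complete cycle per step: predict with motion noise, weight particles by the likelihood of a range measurement to a known beacon, and resample. The cooperative, visual SLAM formulation used in the project is much richer than this toy example, and all numerical values are placeholders.

```python
# Very reduced illustration of a particle filter step (1-D robot position):
# predict with motion noise, weight particles by the likelihood of a range
# measurement to a known beacon, and resample. The cooperative, visual SLAM
# formulation used in the project is much richer than this toy example.
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform(0.0, 10.0, N)       # initial belief over position [m]
true_position, beacon = 3.0, 7.0

def pf_step(particles, control, measurement, motion_std=0.1, meas_std=0.2):
    # 1) predict: apply the motion command with noise
    particles = particles + control + rng.normal(0.0, motion_std, particles.size)
    # 2) update: weight by the Gaussian likelihood of the measured range
    expected = np.abs(beacon - particles)
    weights = np.exp(-0.5 * ((measurement - expected) / meas_std) ** 2)
    weights /= weights.sum()
    # 3) resample particles according to their weights
    idx = rng.choice(particles.size, size=particles.size, p=weights)
    return particles[idx]

for _ in range(5):                            # robot moves 0.5 m per step
    true_position += 0.5
    z = abs(beacon - true_position) + rng.normal(0.0, 0.2)
    particles = pf_step(particles, control=0.5, measurement=z)

print(f"estimated position: {particles.mean():.2f} (true {true_position:.2f})")
```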
Keywords: Cooperative robots, visual SLAM, cooperative exploration, appearance-based reconstruction
Head Researcher: O. Reinoso More Information...
Aidico-DISCO
Title: Study and analysis of disc-based cutting of natural stone by means of process monitoring Funded by: Generalitat Valenciana
Duration: 1/1/2004 - 31/12/2005
Keywords: Stone Cutting
Head Researcher: O. Reinoso
Technical Assistance
RemoteRoboticsLab
Title: Collaboration agreement for the development of the project "Hacia la formación práctica ubicua y digital en robótica mediante laboratorios remotos" (towards ubiquitous, digital hands-on robotics training through remote laboratories) Funded by: Centro de Inteligencia Digital de la Provincia de Alicante (CENID)
Duration: 6 months (April 2022 - October 2022)
Description: This project aims to develop a remote laboratory: a cyber-physical platform that allows students of technical degrees to connect to robots remotely, in order to carry out laboratory practice and experiments with those robots over the Internet. This will give students greater spatial and temporal flexibility, allowing them to carry out laboratory practice ubiquitously, without being forced to travel to a physical laboratory and to work only during the hours in which access to that laboratory is available. Students will connect to the real robots through a web server and, through an interface, will be able to command movements or experiments to be performed with the remote robots. The motion of the robots will be shown through a webcam in real time, and information on the results of the remote experiment will also be returned, captured by position, velocity and force sensors mounted on the real robot. The remote robots to be implemented for distance practice will be of the parallel, or closed kinematic chain, type, since they are richer than traditional serial kinematic chain robots when studied in control and robotics courses.
The project will cover the following four activities: 1) the construction of two parallel robots with which students can carry out practice and experiments over the Internet, 2) the implementation of the web server that manages reservations and remote access to the robots by the students, 3) the programming of graphical user interfaces that allow students to issue commands and experiments while observing the motion of the robot in real time through a webcam, and 4) the design of practical sessions and didactic experiments to be carried out with the help of the remote laboratory developed. The main expected result of this project is the realization of the aforementioned remote laboratory, which will make the ubiquitous, remote execution of practice with real robots more flexible, putting digital technologies at the service of teaching and learning.
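As a minimal sketch of activity 2, the example below exposes a web endpoint that accepts a joint command for a robot and returns (here, simulated) sensor feedback. Flask, the route name and the payload fields are illustrative assumptions; they do not describe the platform actually developed in the project.

```python
# Minimal sketch of a remote-laboratory endpoint: a web server accepts a joint
# command for a robot and returns (here, simulated) sensor feedback. Flask, the
# route name and the payload fields are illustrative assumptions; they do not
# describe the platform actually developed in the project.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/command", methods=["POST"])
def command_robot():
    payload = request.get_json(force=True)
    joints = payload.get("joints", [])
    # In the real laboratory this would be forwarded to the physical robot and
    # the measured position/velocity/force would be read back from its sensors.
    feedback = {
        "accepted": True,
        "joints_commanded": joints,
        "measured_position": joints,      # simulated: assume perfect tracking
        "measured_force": [0.0] * len(joints),
    }
    return jsonify(feedback)

if __name__ == "__main__":
    # A student's client interface could POST, e.g.:
    #   curl -X POST -H "Content-Type: application/json" \
    #        -d '{"joints": [0.1, 0.4]}' http://localhost:5000/command
    app.run(port=5000)
```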
Keywords: Parallel robot, remote laboratory, laboratory practice, identification, control
Head Researcher: Adrián Peidró More Information...
abionica1.20T
Title: Development of algorithms for the detection and tracking of artificial visual markers for drone navigation in large-area inspection tasks Funded by: ABIONICA SOLUTIONS S.L.
Duration: 11/2020 - 04/2021
Head Researcher: A. Gil
abionica1.21T
Title: Use of algorithms for in-flight situational awareness by means of computer vision Funded by: Abionica Solutions S.L.
Duration: 05/2021 - 11/2021
Head Researcher: A. Gil
QBot
Title: Software development contract Funded by: Q-BOT LIMITED
Duration: 2016
Head Researcher: O. Reinoso
IXION1
Title: Contract for the experimental development work forming part of the project submitted to the Plan Avanza2 call, entitled "iCOPILOT Asistente inteligente a la conducción" (intelligent driving assistant) Funded by: IXION INDUSTRY AND AEROSPACE, S.L.
Duration: 2014
Head Researcher: O. Reinoso
IXION2
Title: Contract for the experimental development work forming part of the project submitted to the Plan Avanza2 call, entitled "SUPVERT Vehículo Autónomo Aéreo para Inspección de estructuras Verticales" (autonomous aerial vehicle for the inspection of vertical structures) Funded by: IXION INDUSTRY AND AEROSPACE S.L.
Duration: 2014
Head Researcher: O. Reinoso
Essay A&CN (I)
Title: Development of an acquisition system for impact absorption and deformation tests in accordance with the UNE 4158 IN standard Funded by: Automatica & Control Numérico (A&CN)
Duration: 1/2007 - 1/2008
Head Researcher: O. Reinoso
Integration of Mobile Robots at FITUR 2009
Title: Advisory services on the integration of mobile robots in tourism Funded by: Turisme d'Elx (Ayuntamiento de Elche)
Duration: 2009
Description: Provision of advisory services for the integration of mobile robots at the FITUR 2009 tourism trade fair
Keywords: FITUR, robot, tourism, Elche
Head Researcher: D. Ubeda