Automation, Robotics and Computer Vision Laboratory (ARVC)

José María Marín López

José M. Marín

Associate Professor and Researcher

Automation, Robotics and Computer Vision Lab.

Miguel Hernández University

Address:

Miguel Hernández University
Systems Engineering and Automation Department
INNOVA Building
Avda. Universidad s/n
03202 - Elche (Alicante) SPAIN

Tel:  +34 96 665 8882

Email: jmarin@umh.es

My Projects

Research Projects

Mobile robotics for the automatic surveillance of premises and identification of dangerous situations in challenging conditions using deep learning techniques

Title: Mobile robotics for surveillance using deep learning techniques

Funded by: Agencia Estatal de Investigación - Ministerio de Ciencia, Innovación y Universidades

Duration: 01/09/2024 - 31/12/2027

Description: The project proposes the use of mobile robots for the automatic surveillance of premises, including both indoor and outdoor areas, and for the identification of risk situations under challenging operating conditions, with the common philosophy of using AI tools to address these issues robustly. Currently, these tasks are carried out by specialized personnel with the help of cameras located in fixed positions. We propose to achieve much more effective surveillance through mobile robots that can patrol the areas to be monitored and use different types of sensors (visible-spectrum cameras, infrared cameras, laser range sensors, etc.) to detect both risk elements and risk situations. The results of this project are expected to be relevant both to security companies and to the State security forces and bodies.

To achieve this goal, the project is structured into two main objectives. The first consists of the development of algorithms for the navigation of mobile robots in large social environments and under challenging operating conditions. The second addresses perception and the integration of technologies for the interpretation of the environment and the resolution of security and surveillance tasks with mobile robots. Both objectives are fully aligned with current research trends in mobile robotics and sensory perception, and the research team plans to contribute to the generation of knowledge in these areas.

In addition, the proposal provides solutions to specific problems in the thematic priority "Digital world, industry, space and defense", since it pursues the digitalization of tasks related to improving the security of venues, infrastructures, buildings and industrial facilities. The use of mobile robots gives the proposal great capacity and versatility to detect risk situations, but it requires that the methods developed within the framework of the two objectives expressly take into account the specificities of this kind of application. Accordingly, the project addresses, among other tasks: (a) the creation of maps of extensive environments, indoors and outdoors, with semantic information useful for security and surveillance tasks, including the systematic exploration of a priori unknown environments; (b) the localization of the robot itself (both indoors and outdoors, including under adverse lighting conditions) and of the alarms or risk situations detected from the fusion of sensory information; (c) the planning of trajectories for surveillance, carrying out sweeps of the assigned areas, paying special attention to critical points and boundaries of the enclosure, in collaboration with the human operator; and (d) the interpretation of the environment based on sensory data, with special attention to elements that are critical for security tasks and to the risk situations detected, such as fire outbreaks or people falling. In addition to the development of these algorithms, the work plan contemplates the integration and implementation of a common graphical interface that shows the created map, the estimated position of the robot and the alarms, intrusions or potentially dangerous elements detected, and that allows interaction with the operator, so that he or she can influence the task carried out by the mobile robot.

Grant funded by Agencia Estatal de Investigación - Ministerio de Ciencia, Innovación y Universidades and by the European Union.

This website is part of the project PID2023-149575OB-I00, funded by MICIU/AEI/10.13039/501100011033 and by FEDER, UE.


Keywords: Mobile robot, visual perception, point cloud, deep learning, sensor fusion, localization, mapping, environment interpretation

Head Researcher: A. Gil, L. Payá

More Information...     


HyReBot

Title: Hybrid Robots and Multisensory Reconstruction for Applications in Lattice Structures (HyReBot)

Funded by: Ministerio de Ciencia e Innovación

Duration: 09/2021 - 08/2024

Description: The use of reticular structures, composed of a number of closely intertwined beams or bars, is widespread nowadays in the construction of all types of fastening and support components for different infrastructures. They are especially common in metal bridges, but also in the roofs of hangars and large industrial buildings. They are generally formed by a set of highly interlinked and interconnected bars, joined together by nodes (either rigid or articulated), forming a three-dimensional structural mesh. The execution of both inspection and maintenance tasks on this type of reticular structure is especially challenging owing to (a) the access problems caused by the high interconnection of the bars through the nodes and (b) the complexity of finding paths that allow moving from a starting point to a target point while traversing these structural nodes.

Aerial vehicles have been considered over the past few years as a possible solution to automate these inspection and maintenance tasks on reticular three-dimensional structures. However, the high complexity of such structures (often including narrow gaps between nodes and bars, with a strongly heterogeneous distribution) limits the use of this type of aerial vehicle, since it cannot reach internal locations of the structure that are not easily accessible. Another limitation of these vehicles is their reduced manipulation capacity while in the air.

The present research project focuses on this field. The project will explore the possibility of using robotic units that can move along these reticular structures, navigating through them with 6 degrees of freedom and traversing the reticular nodes present in them, regardless of the arrangement, layout and 3D configuration of the mesh. To address these inspection and/or maintenance tasks, this research project proposes the analysis, design and implementation of hybrid robots. They will consist of simple modules with few degrees of freedom, either with serial or parallel structure, designed in such a way that, when combined into hybrid robots, they can effectively navigate through these reticular structures despite all the challenges they present. In addition to analyzing these robots in depth, from both the kinematic and dynamic points of view, we propose to analyze and demonstrate their ability to navigate through such reticular workspaces, negotiating any possible arrangement of reticular nodes present in such structures.

Finally, it is essential to have a sufficiently precise model of the reticular structure in which these modular robots have to operate and to efficiently estimate their position and orientation in this environment. Considering the experience of the members of the research team in previous projects, the present project also proposes performing the reconstruction of these environments (three-dimensional lattice structures) based on the fusion of the information provided by both range and visual sensors in a 360° field of perception around the robot. To achieve this objective, deep learning techniques will be used to efficiently process the large amount of data provided by the sensors.


Project PID2020-116418RB-I00 funded by MCIN/AEI/10.13039/501100011033.

Keywords: Hybrid robots, visual perception, sensor fusion, reticular structures

Head Researcher: L. Payá, O. Reinoso

More Information...     


PROMETEO2021

Title: Towards Further Integration of Intelligent Robots in Society: Navigate, Recognize and Manipulate

Funded by: GENERALITAT VALENCIANA

Duration: 01/2021 - 12/2024

Description:

In recent years, the number of robots used to perform tasks autonomously in multiple fields and sectors has gradually increased. Today, we can find robots performing repetitive tasks in controlled environments and addressing complex and sometimes dangerous tasks. However, having robots perform tasks in uncontrolled environments, with objects and moving elements present (such as people and other robots) and with the need to move between different points in the scene, presents notable challenges that must be addressed to enable greater integration of robots in such scenarios.

This research project aims to tackle activities within this scope in three specific lines: navigation, recognition, and manipulation, in order to advance the integration of robots and the performance of tasks in these environments. On one hand, it is necessary to consider the presence of humans in these social environments, as their possible movements and behavior will affect how robots should move and, ultimately, navigate within these scenarios. Additionally, there is a need to advance in the tasks of environment recognition, identifying the scenarios to make the localization of robots within them more robust and precise. Finally, the problem of object manipulation by these robots will be addressed, considering both the flexibility in shape and the deformability of these objects.

Project PROMETEO 075/2021 is funded by the Consellería de Innovación, Universidades, Ciencia y Sociedad Digital de la Generalitat Valenciana

Head Researcher: Oscar Reinoso

More Information...     


TED2021

Title: Development of intelligent mobile technologies to address security and surveillance tasks indoors and outdoors

Funded by: Agencia Estatal de Investigación. Ministerio de Ciencia e Innovación

Duration: 12/2022 - 11/2024

Description: This project proposes using mobile robots and machine learning technologies to carry out surveillance and security tasks in indoor and outdoor environments. During the course of the project, it is expected to generate scientific knowledge and to carry out technological developments that digitize and automate the surveillance of buildings, infrastructures and industry. Such developments are expected to have potential for technology transfer to security companies, State security forces and emergency units.

Currently, these tasks are carried out by specialized personnel, mainly with the aid of cameras located in fixed positions and CCTV systems. In this project, it is proposed to perform this surveillance much more effectively and more safely for these personnel, with the support of cooperating mobile robots that can patrol the areas to be monitored and use different types of sensors (omnidirectional vision cameras, infrared cameras, laser range and proximity sensors) and sensor fusion technologies to address two major problems: (a) robot navigation through the environment to be monitored, including building a model or map, localization and trajectory planning, and (b) interpretation of the environment, so that suspicious objects, intrusions by unauthorized personnel and other potentially dangerous situations, such as fire sources and overheating in facilities, can be detected. The project includes the creation of an intuitive graphical interface that allows the user to interact with the robots and the maps created, view the alarms that have been generated and influence the task carried out by the robots.

Both the cooperation among the robots themselves and the cooperation between the potential remote operator and the robots are critical to effective surveillance. This is a cutting-edge technological aspect undergoing great development in current international research. Other technologies involved in the project, such as object and person recognition, deep learning and autonomous robot navigation, are also among the most actively developed today. The proposing research group has a consolidated track record and extensive experience in the fields of mobile robotics, machine learning, image processing and sensor fusion.

Therefore, the proposed idea is framed within the field of the digital transition and seeks to improve and enhance technology to apply it to security and surveillance tasks in buildings, infrastructures and facilities. The main goal of the project is to improve the quality of the work of security employees and the competitiveness of security companies. In particular, the use of mobile robots is proposed in situations in which the use of static security cameras is inappropriate or insufficient, or as support and assistance for existing security personnel. One use case will be, for example, the surveillance of large areas of land in adverse conditions (cold, extreme heat). In addition, the mobile robots will be equipped with sensors that allow the detection of intrusions or security failures in low or no lighting conditions. The proposal also aims to have a minimal ecological impact, as it will use highly efficient electric mobile robots.


This project has been funded by the Agencia Estatal de Investigación - Ministerio de Ciencia e Innovación.


Keywords: Mobile robot, computer vision, image processing, sensor fusion, robot navigation, deep learning

Head Researcher: A. Gil, L. Payá

More Information...     


RETIC

Title: Planning of robotic movements in metallic structures

Funded by: Universidad Miguel Hernández de Elche

Duration: 01/01/2021 - 31/12/2022

Description:

Nowadays, we encounter three-dimensional metallic lattice structures in numerous artificial constructions, such as stadiums, high-voltage or telecommunications towers, airports, construction sites, pipeline networks in refineries, nuclear power plants, and aerospace constructions. These structures, composed of interconnected bars forming genuine metallic networks, require periodic inspection and maintenance to preserve their good condition and functionality and to prevent their structural stability from being compromised by deterioration. Examples of the required tasks include coating the metallic bars of the structure with protective paints to prevent corrosion, non-destructive inspection to detect possible cracks and welding defects, and tightening threaded joints, among others.

Traditionally, these tasks have been performed by human operators who, equipped with safety mechanisms such as harnesses, have to climb the structure and carry out the aforementioned operations. Despite the possible safety measures that can be adopted, performing these operations is dangerous for humans, who are subjected to significant safety and health risks. In order to avoid these dangers to human operators, the possibility of performing these hazardous tasks at height using robots (autonomous or teleoperated) has been pursued over the past three decades. In this project, the objective is to plan movements that a hybrid robot can perform so that it can navigate through these structures and pass through the structural nodes, attaching itself appropriately to carry out inspection and maintenance tasks.

Head Researcher: Oscar Reinoso Garcia

More Information...     


TorqFailRob

Title: Control of parallel robots that have suffered torque failure

Funded by: Universidad Miguel Hernández, Vdo. de Investigación

Duration: 01/01/2022 - 31/12/2022

Description:

This project aims to develop control and stabilization algorithms for parallel robots that have suffered torque failure in one of their actuators. When this happens, the joint connected to the failed actuator behaves as a passive joint that can rotate freely, causing the loss of control of the robot. This is a dangerous situation since the robot can move freely without control and could collide with itself or with objects in the environment.
 
The method intended to be applied in this project is novel since it does not require brakes or redundant actuators, and consists of moving the healthy actuators of the robot to positions where the self-motion varieties vanish. Such self-motion varieties are curves or surfaces on which the robot can slide freely when its healthy actuators are blocked.
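
As a rough numerical illustration of this idea (a toy planar five-bar linkage of our own choosing, not the project's actual robots or algorithms), the sketch below scans the healthy actuator and measures how large the set of free passive motions is at each position; the strategy would drive the healthy actuator towards positions where that set shrinks to (almost) an isolated point, so that locking it also immobilizes the failed joint.

```python
# Toy illustration (hypothetical example, not the project's actual robot):
# planar five-bar linkage with base joints at A1=(0,0) and A2=(d,0),
# proximal links of length l1 and distal links of length l2.
# Actuator 1 (angle t1) is healthy; actuator 2 (angle t2) has lost torque,
# so t2 behaves as a free passive joint.
import numpy as np

d, l1, l2 = 3.0, 1.0, 1.2

def self_motion_measure(t1, n=3600):
    """Fraction of passive angles t2 for which the kinematic loop still closes.

    For fixed t1, this set approximates the self-motion set: an end-effector
    position exists iff the distance between the elbows B1(t1) and B2(t2)
    does not exceed 2*l2 (the two distal circles intersect).
    """
    b1 = np.array([l1 * np.cos(t1), l1 * np.sin(t1)])
    t2 = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    b2 = np.stack([d + l1 * np.cos(t2), l1 * np.sin(t2)], axis=1)
    dist = np.linalg.norm(b2 - b1, axis=1)
    return np.mean(dist <= 2.0 * l2)

# Scan the healthy actuator and pick the position where the self-motion set
# is smallest (ideally degenerating to an isolated point).
t1_grid = np.linspace(0.0, 2.0 * np.pi, 721)
measures = np.array([self_motion_measure(t1) for t1 in t1_grid])
feasible = measures > 0.0
best = t1_grid[feasible][np.argmin(measures[feasible])]
print(f"healthy actuator angle minimizing self-motion: {np.degrees(best):.1f} deg "
      f"(self-motion measure {measures[feasible].min():.4f})")
```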

Keywords: self-motion varieties, parallel robots, underactuated robots

Head Researcher: Adrián Peidró

More Information...     


EMERG2020

Title: Scene Reconstruction from Omnidirectional Cameras Using Visual Appearance Techniques and Deep Learning

Funded by: Generalitat Valenciana

Duration: 01/2020 - 12/2020

Description:


Most existing algorithms that solve mapping and localization problems stop working properly when the robot operates in an unstructured, complex and changing environment, or when the robot can move with more than three degrees of freedom (DOF). In response to this challenge, the main research line of this project proposes the improvement and development of new mechanisms that allow efficient, robust and precise modeling of environments using vision systems. Specifically, the use of omnidirectional vision systems is proposed due to the large amount of information they provide at a relatively low cost. However, the use of these vision systems makes it necessary to consider the challenges of working with the images they provide. In this sense, it is proposed to study global-appearance descriptors in depth and to make use of deep learning techniques.

The project is developed through several objectives, such as the analysis of existing mapping and localization algorithms, the comparison of existing global-appearance algorithms, and the development of new localization algorithms and/or global-appearance descriptors based on deep learning. In order to improve the integration of the mobile robot into real working environments (Industry 4.0), in which robots interact with people, characteristics that make it compatible with human perception will be incorporated into the map.
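
As a purely illustrative sketch (the backbone, weights and training procedure below are assumptions on our part, not the descriptors actually developed in the project), a deep-learning holistic descriptor can be obtained by dropping the classification layer of a convolutional network and using the pooled feature vector to compare scenes:

```python
# Minimal sketch of a learned global-appearance descriptor (illustrative only).
import torch
import torchvision.models as models

# A ResNet-18 backbone without its classification layer yields a 512-D vector
# per image, usable as a holistic descriptor of the whole scene.
# In practice, pretrained or fine-tuned weights would be loaded.
backbone = models.resnet18()
descriptor_net = torch.nn.Sequential(*list(backbone.children())[:-1])
descriptor_net.eval()

def global_descriptor(image_batch: torch.Tensor) -> torch.Tensor:
    """image_batch: (N, 3, H, W) tensor of (e.g. panoramic) images."""
    with torch.no_grad():
        feats = descriptor_net(image_batch)      # (N, 512, 1, 1)
        feats = feats.flatten(1)                 # (N, 512)
        return torch.nn.functional.normalize(feats, dim=1)

# Localization by retrieval: the current image is matched against map images
# by comparing descriptors, e.g. with cosine similarity.
map_desc = global_descriptor(torch.rand(10, 3, 224, 224))   # toy "map"
query = global_descriptor(torch.rand(1, 3, 224, 224))
print("most similar map node:", int(torch.argmax(map_desc @ query.T)))
```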

Keywords: Deep learning, scene reconstruction, localization, omnidirectional vision

Head Researcher: M. Ballesta

More Information...     


OMMNI-SLAM

Title: Map Building by Means of Appearance Visual Systems for Robot Navigation

Funded by: CICYT Ministerio de Ciencia e Innovación

Duration: 01/01/2017 - 31/12/2019

Description: In order to be truly autonomous, a mobile robot should be capable of navigating through any kind of environment while carrying out a task. To do so, the robot must be able to create a model of its workspace that allows it to estimate its position within it and navigate along a trajectory.
 
Map building and navigation are currently very active research areas, on which a large number of researchers focus and in which very different approaches have emerged, based on diverse algorithms and various kinds of sensory information. To date, most efforts have focused on the construction of models of the environment based on a set of significant points extracted from it, without considering the global appearance of the scene.
 
Considering the concepts posed above, we propose the improvement and development of new mechanisms that allow efficient, robust and precise modelling of the environment by making use of omnidirectional vision systems. The research group has experience in these areas and in recent years has developed different approaches to map building, localization, exploration and SLAM by means of information gathered by different kinds of vision systems installed on the robots. In order to carry out these approaches, an extensive study of the different description methods has been performed, covering both methods based on the extraction of significant points and local descriptors and methods based on the global appearance of the image, with remarkable results.
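
As an illustrative sketch only (one classical global-appearance descriptor chosen by us as an example, not necessarily the one used in the project), the magnitude of the row-wise Fourier transform of a panoramic image yields a compact signature that is invariant to the robot's orientation, because a rotation of the robot only shifts the panorama columns:

```python
# Minimal global-appearance localization sketch (illustrative assumption).
import numpy as np

def fourier_signature(panorama, k=16):
    """panorama: 2-D grayscale panoramic image (rows x columns)."""
    # The magnitude of the row-wise FFT is invariant to column (rotation) shifts.
    spectrum = np.abs(np.fft.fft(panorama.astype(float), axis=1))[:, :k]
    signature = spectrum.flatten()
    return signature / (np.linalg.norm(signature) + 1e-12)

def localize(query, map_images):
    """Return the index of the stored map image most similar to the query."""
    q = fourier_signature(query)
    dists = [np.linalg.norm(q - fourier_signature(m)) for m in map_images]
    return int(np.argmin(dists))

# Toy check: a column-shifted (rotated) copy of a map image is still matched.
rng = np.random.default_rng(0)
map_imgs = [rng.random((64, 256)) for _ in range(5)]
rotated = np.roll(map_imgs[3], 40, axis=1)
print(localize(rotated, map_imgs))   # -> 3
```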

Keywords: Mobile robots, autonomous navigation, computer vision, omnidirectional systems

Head Researcher: L. Payá, O. Reinoso

More Information...     


BinaryRobot

Title: Design and development of a hybrid structure robot with binary-operated hydraulic actuators.

Funded by: Generalitat Valenciana

Duration: 01/01/2018 - 31/12/2019

Description:

Steel structures require inspection, maintenance, and repair tasks to ensure their proper functioning, stability, structural integrity, longevity, and aesthetic quality. Such structures are present in numerous constructions such as bridges, ports, airports, telecommunications towers, stadiums, power lines, power plants, and industrial plants, as well as forming part of the framework of most buildings. Typically, the maintenance tasks for these vertical structures are performed by human operators who must climb the structures to carry out these tasks, subjecting them to serious risks, including falling from considerable heights or electrocution. To avoid exposing human operators to such risks, for the past couple of decades, numerous researchers worldwide have been studying the possibility of using climbing robots to perform these dangerous tasks at height.

The main objective we propose in this project is to develop a new articulated climbing robot for the exploration and maintenance of vertical steel structures, with the ability to move in three-dimensional space. The main innovation of the robot to be developed in this project, compared to other climbing robots developed to date, is that the proposed robot will have binary actuation (all-or-nothing actuators), which greatly simplifies the planning and control of its movements. Additionally, the robot to be developed will have a moderately high degree of kinematic redundancy (between 10 and 12 degrees of freedom), allowing it to enjoy sufficiently high mobility to explore three-dimensional structures despite having only binary actuators. In this way, by combining binary actuation and kinematic redundancy, we aim to achieve a balance between simplicity and freedom of movement, thereby addressing the main complexity issues that currently prevent climbing robots from being used more extensively.
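
A minimal sketch of why binary actuation simplifies planning, using a hypothetical planar chain (the real robot's geometry and actuation are not described here): with n two-state joints there are only 2^n reachable configurations, so choosing a posture reduces to a search over a finite set instead of a continuous space.

```python
# Toy sketch of planning with binary (all-or-nothing) actuators.
import itertools
import numpy as np

LINK_LENGTH = 0.5
JOINT_STATES = (-np.pi / 6, np.pi / 6)   # each binary joint takes one of two angles
N_JOINTS = 10                            # 2**10 = 1024 discrete configurations

def forward_kinematics(states):
    """End-effector position of a planar chain for one binary configuration."""
    x = y = heading = 0.0
    for s in states:
        heading += s
        x += LINK_LENGTH * np.cos(heading)
        y += LINK_LENGTH * np.sin(heading)
    return np.array([x, y])

# Discrete search: enumerate all binary configurations and keep the one whose
# end-effector lands closest to the target.
target = np.array([3.5, 1.5])
best = min(itertools.product(JOINT_STATES, repeat=N_JOINTS),
           key=lambda cfg: np.linalg.norm(forward_kinematics(cfg) - target))
print("best binary configuration:", ["+" if s > 0 else "-" for s in best])
print("end-effector:", forward_kinematics(best))
```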

Keywords: climbing robot, binary operation

Head Researcher: M. Ballesta

More Information...     


NAVICOM

Title: Robotic Navigation in Dynamic Environments by means of Compact Maps with Global Appearance Visual Information

Funded by: CICYT Ministerio de Ciencia e Innovación

Duration: 01/09/2014 - 31/05/2017

Description: Carrying out a task with a team of mobile robots that move across an unknown environment is one of the open research lines with the greatest scope for development in the mid-term. To accomplish such a task, it has proved necessary to have a highly detailed map of the environment that allows the robots to localize themselves as they execute the task. In recent years, the proposing research team has worked, with remarkable results, in the field of SLAM (Simultaneous Localization and Mapping) with teams of mobile robots. This work has considered the use of robots equipped with cameras and the inclusion of the visual information gathered in order to build map models. So far, different kinds of maps have been built, including metric maps based on visual landmarks, as well as topological maps based on global-appearance information extracted from images.

These maps have allowed the robots to navigate and to perform high-level tasks in the environment. Nonetheless, there is room for improvement in several areas related to the research carried out so far. Currently, one of the important problems is the treatment of the visual information and its updating as the environment changes gradually. In addition, the maps should be created considering the dynamic and static parts of the environment (for example, when other mobile robots or people move in it), thus leading to more realistic models, together with strategies to update the maps as changes are detected. A different research line considers the creation of maps that simultaneously combine information about the topology of the environment with semantic and metric information, which will allow a more effective localization of the robot in large environments and, in addition, will enable hierarchical localization in these maps. The proposed research project tackles the aforementioned lines, developing dynamic visual maps that incorporate the semantic and topological structure of the environment, as well as metric information, when the robots perform trajectories with 6 degrees of freedom.

Keywords: Mobile Robots, Visual Maps, Topological and Compact Navigation, Visual SLAM

Head Researcher: A. Gil, O. Reinoso

More Information...     


VISTOPMAP

Title: Topological mapping using the global appearance of a set of images

Funded by: Generalitat Valenciana

Duration: 01/01/2015 - 31/12/2016

Description:

When a mobile robot performs a task within a given environment, it needs to have some knowledge of that environment to effectively carry out the task. Generally, the environments in which robots operate are unstructured, complex, and changing. Thus, it is crucial to create models of these environments based on the information and observations captured by the robots within them to ensure effective localization. This is the focus of the project proposal.

The main objective we propose is to solve the problem of creating maps of an unknown environment, using the information provided by a vision system installed on the robot that explores it. The traditional approach to solving such problems involves extracting local features from the scenes and creating metric maps in which the robot's position can be estimated relative to a global reference system, with an associated error. In contrast, we propose using the global-appearance information of the scenes to create topological maps, which contain information about the locations that make up the environment and the connectivity relationships between them. These are more recent and computationally efficient alternatives that, however, require in-depth study for the creation of robust maps of extensive and dynamic environments.
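
A minimal sketch, under our own simplifying assumptions (not the project's actual map structure), of what such a topological map could look like: a graph whose nodes hold one global-appearance descriptor per visited location, with edges given by traversal order and by appearance-based loop closures.

```python
# Toy topological map built from a sequence of global-appearance descriptors.
import numpy as np

def build_topological_map(descriptors, loop_threshold=0.15):
    """descriptors: list of 1-D unit-norm global-appearance vectors, in visit order."""
    edges = set()
    for i in range(1, len(descriptors)):
        edges.add((i - 1, i))                       # traversal connectivity
        for j in range(i - 1):
            if np.linalg.norm(descriptors[i] - descriptors[j]) < loop_threshold:
                edges.add((j, i))                   # appearance-based loop closure
    return edges

# Toy usage: the robot revisits location 0 at the end of its trajectory.
rng = np.random.default_rng(1)
descs = [d / np.linalg.norm(d) for d in rng.random((6, 32))]
descs.append(descs[0])
print(sorted(build_topological_map(descs)))   # includes the loop-closure edge (0, 6)
```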

Keywords: Omnidirectional vision, global appearance, topological map, hierarchical localization

Head Researcher: L. Payá

More Information...     


VISCOBOT II

Title: Integrated Exploration of Environments by means of Cooperative Robots in order to Build 3D Visual and Topological Maps intended for 6 DOF Navigation

Funded by: CICYT Ministerio de Ciencia e Innovación

Duration: 01/01/2011 - 31/12/2013

Description: While a group of mobile robots carries out a task, the robots need to determine their location within the environment. Consequently, they must have a precise map of a general, a priori unknown environment. During the last decade, a series of methods have been developed that allow a mobile robot to construct such a map. These algorithms consider the case in which the vehicle moves along the environment and constructs the map while simultaneously computing its location within it. As a result, this problem has been named Simultaneous Localization and Mapping (SLAM). This research project thus focuses on the construction of visual maps of general, unknown 3D environments using a team of mobile robots equipped with vision sensors. In this sense, we propose to undertake, among others, the following lines: 6 DOF cooperative visual SLAM, in which the robots follow general trajectories in the environment (with 6 degrees of freedom) instead of the classical trajectories in which the robots are assumed to navigate on a two-dimensional plane; integrated exploration, where the exploration paths of the robots aim to maximize the knowledge of the environment while taking into account the uncertainty in the maps created by the robot(s); map alignment and fusion of the local maps created by different robots; and, finally, the creation of maps using information based on visual appearance, which allows the construction of high-level topological maps.

Keywords: Robotics, Visual SLAM, Cooperative Exploration

Head Researcher: O. Reinoso

More Information...     


Technical Assistance

RemoteRoboticsLab

Title: Collaboration agreement for the development of the project "Hacia la formación práctica ubicua y digital en robótica mediante laboratorios remotos" (Towards ubiquitous and digital practical training in robotics through remote laboratories)

Funded by: Centro de Inteligencia Digital de la Provincia de Alicante (CENID)

Duration: 6 months (April 2022 - October 2022)

Description: This project aims to develop a remote laboratory, a cyber-physical platform that allows students of technical degrees to connect to robots remotely, over the Internet, in order to carry out laboratory practice sessions and experiments with those robots. This will give students greater spatial and temporal flexibility, allowing them to carry out laboratory work ubiquitously, without having to travel to a physical laboratory and without being limited to the hours during which access to that laboratory is available. Students will connect to the real robots through a web server and, through an interface, will be able to command movements or experiments to be performed by the remote robots. The motion of the robots will be shown through a webcam in real time, and information about the results of the remote experiment will also be returned, captured by position, velocity and force sensors mounted on the real robot. The remote robots to be implemented for these distance practice sessions will be of parallel (closed kinematic chain) type, since they offer greater richness than traditional serial kinematic chain robots when studied in control and robotics courses.

The project will cover the following four activities: 1) construction of two parallel robots with which students can carry out practice sessions and experiments over the Internet; 2) implementation of the web server that manages bookings and remote access to the robots by the students; 3) programming of graphical user interfaces that allow students to issue commands and experiments while observing the motion of the robot in real time through a webcam; and 4) design of didactic practice sessions and experiments to be carried out with the help of the developed remote laboratory. The main expected result of this project is the materialization of this remote laboratory, which will make it possible to carry out practice sessions with real robots remotely and ubiquitously, putting digital technologies at the service of teaching and learning.
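
As a sketch of the kind of web endpoint such a remote laboratory could expose (the route, payload and helper functions below are hypothetical illustrations, not the server actually developed in the project): the student posts a motion command, the server forwards it to the physical robot and returns the readings of its position, velocity and force sensors.

```python
# Minimal remote-laboratory endpoint sketch (hypothetical names and routes).
from flask import Flask, jsonify, request

app = Flask(__name__)

def send_to_robot(command):
    """Placeholder for the driver that commands the real parallel robot."""
    print("commanding robot:", command)

def read_sensors():
    """Placeholder for the readout of the robot's sensors."""
    return {"position": [0.0, 0.0], "velocity": [0.0, 0.0], "force": 0.0}

@app.route("/experiment/move", methods=["POST"])
def move_robot():
    command = request.get_json()        # e.g. {"q1": 0.30, "q2": -0.15}
    send_to_robot(command)
    return jsonify(read_sensors())

if __name__ == "__main__":
    app.run(port=8080)
```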

Keywords: Parallel robot, remote laboratory, laboratory practice sessions, identification, control

Head Researcher: Adrián Peidró

More Information...     


Tonalidad de Pieles

Title: Development of software for the detection and measurement of different leather tones

Funded by: PIES CUADRADOS LEATHER S.L.

Duration: 2019 - 2020

Description: The objective of this proposal is the study, development and implementation of a computer vision system for classifying the tone of dyed leather pieces according to the visual appearance of the leather tone.
 
Color measurement and classification of large pieces of dyed leather, in order to obtain a uniform production in the footwear industry, is an unsolved technical problem due to the difficulties imposed by the high spatial variability of the tone and of the texture of the leather. The visual perception of the tone of a product is influenced by multiple factors: illumination, the absorption properties of the material and the response of the sensor used. Each of these factors is subject to both spatial and temporal variation. Work in the field of color perception has produced a set of models and tools to unambiguously define the color of a point with respect to spectral references, but measuring and perceiving tone differences in non-uniform materials, applied to the production of items made up of multiple pieces in which the compatibility of visual appearance is decisive, is a much more complex problem that is still under study.
 
On this basis, the study and development of the different aspects and technologies applied to leather tone classification is proposed. This includes the study and implementation of uniform color descriptors that allow the distance in visual appearance to be measured robustly, and the study of color calibration and correction techniques that make it possible to track the spatial and temporal variations of the acquisition and illumination system. Additionally, the study of texture descriptors applicable to color images is proposed, taking into account not only the point-wise tone but also its spatial variability due to the texture of the material. Classification and pattern recognition techniques will be designed and analyzed to establish robust decision rules. Finally, the results obtained will be implemented in an industrial leather classification system based on computer vision.
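
As an illustrative sketch of one such building block (an assumption on our part, not the delivered industrial system), the tone of two dyed-leather patches can be compared in a perceptually more uniform color space such as CIELAB, averaging the per-pixel color difference over the patch to absorb spatial variability.

```python
# Toy tone comparison in CIELAB (illustrative only).
import numpy as np
from skimage import color

def mean_delta_e(patch_a, patch_b):
    """patch_a, patch_b: RGB images (H, W, 3) with float values in [0, 1]."""
    lab_a = color.rgb2lab(patch_a)
    lab_b = color.rgb2lab(patch_b)
    # CIE76 Delta-E: Euclidean distance in Lab per pixel, then averaged.
    return float(np.mean(np.linalg.norm(lab_a - lab_b, axis=-1)))

# Toy usage: a patch compared with a slightly lighter copy of itself.
rng = np.random.default_rng(0)
patch = rng.random((32, 32, 3)) * 0.5 + 0.25
print(mean_delta_e(patch, np.clip(patch * 1.05, 0.0, 1.0)))
```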

Head Researcher: O. Reinoso

More Information...     


IXION1

Title: Contract for the experimental development work forming part of the project submitted to the Plan Avanza2 programme entitled "iCOPILOT Asistente inteligente a la conducción" (iCOPILOT intelligent driving assistant)

Funded by: IXION INDUSTRY AND AEROSPACE, S.L.

Duration: 2014

Head Researcher: O. Reinoso


IXION2

Title: Contract for the experimental development work forming part of the project submitted to the Plan Avanza2 programme entitled "SUPVERT Vehículo Autónomo Aéreo para Inspección de estructuras Verticales" (SUPVERT autonomous aerial vehicle for the inspection of vertical structures)

Funded by: IXION INDUSTRY AND AEROSPACE S.L.

Duration: 2014

Head Researcher: O. Reinoso


Essay A&CN (I)

Title: Development of an acquisition system for impact absorption and deformation tests in accordance with the UNE 4158 IN standard

Funded by: Automatica & Control Numérico (A&CN)

Duration: 1/2007 - 1/2008

Head Researcher: O. Reinoso


Essay A&CN (II)

Title: Analysis and deployment of a mechanical system for sports pavement in accordance with the UNE 4158 IN standard

Funded by: Automática & Control Numérico (A&CN)

Duration: 4/2007 - 1/2008

Head Researcher: O. Reinoso



  © Automation, Robotics and Computer Vision Lab. (ARVC) - UMH