Show simple item record

dc.contributor.author: Gutiérrez Zaballa, Jon
dc.contributor.author: Basterrechea Oyarzabal, Koldobika
dc.contributor.author: Echanove Arias, Francisco Javier
dc.contributor.author: Martínez González, María Victoria
dc.contributor.author: Martínez Corral, Unai
dc.contributor.author: Mata Carballeira, Oscar
dc.contributor.author: Del Campo Hagelstrom, Inés Juliana
dc.date.accessioned: 2025-01-27T15:24:48Z
dc.date.available: 2025-01-27T15:24:48Z
dc.date.issued: 2023-06
dc.identifier.citation: Journal of Systems Architecture 139 (2023) // Article ID 102878
dc.identifier.issn: 1383-7621
dc.identifier.issn: 1873-6165
dc.identifier.uri: http://hdl.handle.net/10810/71898
dc.description.abstract: Most current computer-vision-based advanced driver assistance systems (ADAS) detect and track objects quite successfully under regular conditions. However, under adverse weather and changing lighting conditions, and in complex situations with many overlapping objects, these systems are not completely reliable. The spectral reflectance of the different objects in a driving scene beyond the visible spectrum can offer additional information to increase the reliability of these systems, especially under challenging driving conditions. Furthermore, this information may be significant enough to develop vision systems that allow for a better understanding and interpretation of the whole driving scene. In this work we explore the use of snapshot, video-rate hyperspectral imaging (HSI) cameras in ADAS on the assumption that the near-infrared (NIR) spectral reflectance of different materials can help to better segment the objects in real driving scenarios. To do this, we have used the HSI-Drive 1.1 dataset to perform various experiments on spectral classification algorithms. However, retrieving information from hyperspectral recordings of natural outdoor scenarios is challenging, mainly because of deficient color constancy and other inherent shortcomings of current snapshot HSI technology, which limits the development of pure spectral classifiers. Consequently, in this work we analyze to what extent the spatial features encoded by standard, tiny fully convolutional network (FCN) models can improve the performance of HSI segmentation systems for ADAS applications. To be realistic from an engineering viewpoint, this research focuses on the development of a feasible HSI segmentation system for ADAS, which implies considering implementation constraints and latency specifications throughout the algorithmic development process.
For this reason, it is particularly important to include the raw-image preprocessing stage in the data processing pipeline under study. Accordingly, this paper describes the development and deployment of a complete machine-learning-based HSI segmentation system for ADAS, including the characterization of its performance on different embedded computing platforms: a single-board computer, an embedded GPU SoC, and a programmable system-on-chip (PSoC) with embedded FPGA. We verify the superiority of the FPGA-PSoC over the GPU-SoC in terms of energy consumption and, particularly, processing latency, and demonstrate that it is feasible to achieve segmentation speeds within the range of ADAS industry specifications using standard development tools.
dc.description.sponsorship: Basque Government: KK-2021/00111; Basque Government: PRE_2021_1_0113; Spanish Ministry of Science and Innovation: PID2020-115375RB-I00
dc.language.iso: eng
dc.publisher: Elsevier
dc.relation: info:eu-repo/grantAgreement/MICINN/PID2020-115375RB-I00
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: hyperspectral imaging
dc.subject: scene understanding
dc.subject: fully convolutional networks
dc.subject: autonomous driving systems
dc.subject: system on chip
dc.subject: benchmarks
dc.title: On-chip hyperspectral image segmentation with fully convolutional networks for scene understanding in autonomous driving
dc.type: info:eu-repo/semantics/article
dc.rights.holder: © 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license
dc.relation.publisherversion: https://www.sciencedirect.com/science/article/pii/S1383762123000577
dc.identifier.doi: 10.1016/j.sysarc.2023.102878
dc.departamentoes: Electricidad y electrónica
dc.departamentoes: Tecnología electrónica
dc.departamentoeu: Elektrizitatea eta elektronika
dc.departamentoeu: Teknologia elektronikoa

