Show simple item record

dc.contributor.author: Cantero, David
dc.contributor.author: Esnaola-Gonzalez, Iker
dc.contributor.author: Miguel Alonso, José
dc.contributor.author: Jauregi Iztueta, Ekaitz
dc.date.accessioned: 2022-08-01T11:26:13Z
dc.date.available: 2022-08-01T11:26:13Z
dc.date.issued: 2022
dc.identifier.citation: Sensors 22(11) : (2022) // Article ID 4205
dc.identifier.issn: 1424-8220
dc.identifier.uri: http://hdl.handle.net/10810/57103
dc.description.abstract: Object detection is an essential capability for performing complex tasks in robotic applications. Today, deep learning (DL) approaches are the basis of state-of-the-art solutions in computer vision, where they provide very high accuracy albeit with high computational costs. Due to the physical limitations of robotic platforms, embedded devices are not as powerful as desktop computers, and adjustments have to be made to deep learning models before transferring them to robotic applications. This work benchmarks deep learning object detection models in embedded devices. Furthermore, some hardware selection guidelines are included, together with a description of the most relevant features of the two boards selected for this benchmark. Embedded electronic devices integrate a powerful AI co-processor to accelerate DL applications. To take advantage of these co-processors, models must be converted to a specific embedded runtime format. Five quantization levels applied to a collection of DL models are considered; two of them allow the execution of models in the embedded general-purpose CPU and are used as the baseline to assess the improvements obtained when running the same models with the three remaining quantization levels in the AI co-processors. The benchmark procedure is explained in detail, and a comprehensive analysis of the collected data is presented. Finally, the feasibility and challenges of the implementation of embedded object detection applications are discussed.
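The abstract mentions quantizing DL models so they can run on embedded AI co-processors. As a rough, self-contained illustration of the underlying idea (not the paper's actual conversion pipeline), the following sketch applies affine int8 quantization to a small weight tensor; the function names and sample values are hypothetical:

```python
# Illustrative sketch of affine (asymmetric) int8 quantization, the kind of
# transformation applied when converting a float model for an embedded
# AI co-processor. All names and values here are made up for illustration.

def quantize_int8(values):
    """Map floats to int8 via q = round(v / scale) + zero_point."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0          # spread the range over 256 levels
    zero_point = round(-128 - lo / scale)      # int8 code that represents lo
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from int8 codes."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]          # toy float32 "weights"
q, scale, zp = quantize_int8(weights)
recovered = dequantize(q, scale, zp)
# Each recovered value differs from the original by at most one scale step.
```

The benchmark in the article compares several such quantization levels (CPU baselines vs. co-processor formats); this sketch only shows why quantization trades a bounded precision loss for a much smaller, integer-only representation.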
dc.description.sponsorship: This work has received support from the following programs: PID2019-104966GB-I00 (Spanish Ministry of Science and Innovation), IT-1244-19 (Basque Government), KK-2020/00049, KK-2021/00111 and KK-2021/00095 (Elkartek projects 3KIA, ERTZEAN and SIGZE, funded by the SPRI-Basque Government) and the AI-PROFICIENT project funded by the European Union's Horizon 2020 research and innovation program under grant agreement no. 957391
dc.language.iso: eng
dc.publisher: MDPI
dc.relation: info:eu-repo/grantAgreement/EC/H2020/957391
dc.relation: info:eu-repo/grantAgreement/MICINN/PID2019-104966GB-I00
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: object detection
dc.subject: embedded devices
dc.subject: deep learning
dc.subject: benchmarking
dc.title: Benchmarking Object Detection Deep Learning Models in Embedded Devices
dc.type: info:eu-repo/semantics/article
dc.date.updated: 2022-06-09T13:40:55Z
dc.rights.holder: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
dc.relation.publisherversion: https://www.mdpi.com/1424-8220/22/11/4205
dc.identifier.doi: 10.3390/s22114205
dc.contributor.funder: European Commission
dc.departamentoes: Lenguajes y sistemas informáticos
dc.departamentoes: Arquitectura y Tecnología de Computadores
dc.departamentoeu: Lengoaia eta Sistema Informatikoak
dc.departamentoeu: Konputagailuen Arkitektura eta Teknologia


