Show simple item record

dc.contributor.author: Aranjuelo Ansa, Nerea
dc.contributor.author: García Castaño, Jorge
dc.contributor.author: Unzueta Irurtia, Luis
dc.contributor.author: García Torres, Sara
dc.contributor.author: Elordi Hidalgo, Unai
dc.contributor.author: Otaegui Madurga, Oihana
dc.date.accessioned: 2021-08-13T08:08:59Z
dc.date.available: 2021-08-13T08:08:59Z
dc.date.issued: 2021
dc.identifier.citation: In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) 5 : 80-91 (2021) [es_ES]
dc.identifier.isbn: 978-989-758-488-6
dc.identifier.issn: 2184-4321
dc.identifier.uri: http://hdl.handle.net/10810/52866
dc.description.abstract: [EN] Synthetic simulated environments are gaining popularity in the Deep Learning era, as they can alleviate the effort and cost of two critical tasks in building multi-camera systems for surveillance applications: setting up the camera system to cover the use cases and generating the labeled dataset to train the required Deep Neural Networks (DNNs). However, no simulated environments exist that are ready to solve these tasks for all kinds of scenarios and use cases. Typically, ‘ad hoc’ environments are built, which cannot be easily applied to other contexts. In this work we present a methodology to build, with little effort, synthetic simulated environments that are general enough to be usable in different contexts. Our methodology tackles the challenges of appropriately parameterizing scene configurations, of strategies to randomly generate a wide and balanced range of situations of interest for training DNNs with synthetic data, and of quick image capturing from virtual cameras considering the rendering bottlenecks. We show a practical implementation example for the detection of incorrectly placed luggage in aircraft cabins, including a qualitative and quantitative analysis of the data generation process and its influence on DNN training, as well as the modifications required to adapt it to other surveillance contexts. [es_ES]
dc.description.sponsorship: This work has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation program under grant agreement No. 865162, SmaCS (https://www.smacs.eu/) [es_ES]
dc.language.iso: eng [es_ES]
dc.publisher: SciTePress, Science and Technology Publications, Lda [es_ES]
dc.relation: info:eu-repo/grantAgreement/EC/H2020/865162 [es_ES]
dc.rights: info:eu-repo/semantics/openAccess [es_ES]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/ [*]
dc.subject: simulated environments [es_ES]
dc.subject: synthetic data [es_ES]
dc.subject: deep neural networks [es_ES]
dc.subject: object detection [es_ES]
dc.subject: video surveillance [es_ES]
dc.title: Building synthetic simulated environments for configuring and training multi-camera systems for surveillance applications [es_ES]
dc.type: info:eu-repo/semantics/conferenceObject [es_ES]
dc.rights.holder: ©2021 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved. CC BY-NC-ND 4.0 [es_ES]
dc.rights.holder: Attribution-NonCommercial-NoDerivs 3.0 Spain (Atribución-NoComercial-SinDerivadas 3.0 España) [*]
dc.relation.publisherversion: https://www.scitepress.org/PublicationsDetail.aspx?ID=Pr3XxXfcWd8=&t=1 [es_ES]
dc.identifier.doi: 10.5220/0010232400800091
dc.contributor.funder: European Commission
dc.departamentoes: Lenguajes y sistemas informáticos [es_ES]
dc.departamentoeu: Hizkuntza eta sistema informatikoak [es_ES]
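The abstract above names three methodological challenges: parameterizing scene configurations, randomly generating a wide and balanced range of situations, and quickly capturing images from virtual cameras. As a minimal illustrative sketch of the second point only (not taken from the paper; the names SceneConfig, LUGGAGE_STATES, and sample_balanced_configs are hypothetical), balanced random sampling of scene parameters for the cabin-luggage use case might look like this in Python:

```python
# Minimal illustrative sketch (not from the paper): sample parameterized
# scene configurations so that the classes of interest (here, luggage
# placement states) are equally represented, while the remaining scene
# parameters are randomized. All names are hypothetical.
import random
from dataclasses import dataclass

LUGGAGE_STATES = ["stowed", "under_seat", "on_seat", "blocking_aisle"]

@dataclass
class SceneConfig:
    seat_row: int
    seat_col: int
    luggage_state: str
    camera_id: int

def sample_balanced_configs(n_per_state: int, n_rows: int = 30,
                            n_cols: int = 6, n_cameras: int = 4):
    """Yield an equal number of configurations per luggage state,
    randomizing seat position and capturing camera."""
    for state in LUGGAGE_STATES:
        for _ in range(n_per_state):
            yield SceneConfig(
                seat_row=random.randrange(n_rows),
                seat_col=random.randrange(n_cols),
                luggage_state=state,
                camera_id=random.randrange(n_cameras),
            )

if __name__ == "__main__":
    for cfg in sample_balanced_configs(n_per_state=2):
        # In a full pipeline each config would drive the simulator's
        # renderer and label exporter; here we only print it.
        print(cfg)
```

In a pipeline of this kind, each sampled configuration would drive the simulator’s renderer and label exporter; balancing on the class of interest is what keeps the resulting synthetic training set from being dominated by the easiest or most common situations.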


