
dc.contributor.author: Núñez Marcos, Adrián
dc.contributor.author: Azkune Galparsoro, Gorka
dc.contributor.author: Arganda Carreras, Ignacio
dc.date.accessioned: 2022-05-12T07:47:46Z
dc.date.available: 2022-05-12T07:47:46Z
dc.date.issued: 2022-02
dc.identifier.citation: Neurocomputing 472 : 175-197 (2022)
dc.identifier.issn: 0925-2312
dc.identifier.issn: 1872-8286
dc.identifier.uri: http://hdl.handle.net/10810/56520
dc.description.abstract: [EN] The egocentric action recognition (EAR) field has recently grown in popularity due to the affordable, lightweight wearable cameras available nowadays, such as the GoPro and similar devices. As a result, the amount of egocentric data generated has increased, triggering interest in the understanding of egocentric videos. More specifically, the recognition of actions in egocentric videos has gained popularity due to the challenge it poses: the wild movement of the camera and the lack of context make it hard to recognise actions with a performance similar to that of third-person vision solutions. This has ignited research interest in the field and, nowadays, many public datasets and competitions can be found in both the machine learning and computer vision communities. In this survey, we analyse the literature on egocentric vision methods and algorithms. To that end, we propose a taxonomy that divides the literature into various categories with subcategories, contributing a more fine-grained classification of the available methods. We also review the zero-shot approaches used by the EAR community, a methodology that could help transfer EAR algorithms to real-world applications. Finally, we summarise the datasets used by researchers in the literature.
dc.description.sponsorship: We gratefully acknowledge the support of the Basque Government's Department of Education for the predoctoral funding of the first author. This work has been supported by the Spanish Government under the FuturAAL-Context project (RTI2018-101045-B-C21) and by the Basque Government under the Deustek project (IT-1078-16-D).
dc.language.iso: eng
dc.publisher: Elsevier
dc.relation: info:eu-repo/grantAgreement/MICIU/RTI2018-101045-B-C21
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/es/
dc.subject: deep learning
dc.subject: computer vision
dc.subject: human action recognition
dc.subject: egocentric vision
dc.subject: few-shot learning
dc.title: Egocentric Vision-based Action Recognition: A survey
dc.type: info:eu-repo/semantics/article
dc.rights.holder: (c) 2021 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
dc.rights.holder: Atribución-NoComercial-SinDerivadas 3.0 España
dc.relation.publisherversion: https://www.sciencedirect.com/science/article/pii/S0925231221017586?via%3Dihub
dc.identifier.doi: 10.1016/j.neucom.2021.11.081
dc.departamentoes: Ciencia de la computación e inteligencia artificial
dc.departamentoeu: Konputazio zientziak eta adimen artifiziala

