Show simple item record

dc.contributor.author: Sun, Danyang
dc.contributor.author: Dornaika, Fadi
dc.contributor.author: Hoang, Vinh Truong
dc.contributor.author: Barrena Orueechebarria, Nagore
dc.date: 2026-09-27
dc.date.accessioned: 2024-12-30T19:34:26Z
dc.date.available: 2024-12-30T19:34:26Z
dc.date.issued: 2024-09-27
dc.identifier.citation: 2024 IEEE International Conference on Image Processing (ICIP) : 624-630 (2024)
dc.identifier.isbn: 979-8-3503-4939-9
dc.identifier.uri: http://hdl.handle.net/10810/71075
dc.description.abstract: Data augmentation can mitigate overfitting problems in data exploration without increasing the size of the model. Existing CutMix-based data augmentation has been proven to significantly enhance deep learning performance. However, many existing methods overlook the discriminative local context of the image and rely on ad hoc regions consisting of square or rectangular local regions, resulting in the loss of complete semantic object parts. In this work, we propose a superpixel-wise local-context-aware efficient image mixing approach for data augmentation, aiming to overcome the limitations previously mentioned. Our approach requires only one forward propagation, using superpixel attention-based label mixing with lower computational complexity. The model is trained using a combination of a global classification loss on the mixed (augmented) image, a superpixel-wise weighted local classification loss, and a superpixel-based weighted contrastive learning loss. The last two losses are based on the superpixel-aware attentive embeddings. Thus, the resulting deep encoder can learn both local and global features of the images, capturing object-part local context information. Experiments on diverse benchmarks, such as ImageNet-1K and CUB-200-2011, indicate that the proposed method outperforms many augmentation methods for visual recognition. We have demonstrated its effectiveness not only on CNN models, but also on transformer models.
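The core idea in the abstract — cut regions that follow superpixel boundaries rather than rectangles, and mix labels in proportion to the pasted area — can be sketched in a few lines. This is an illustrative sketch only, not the paper's attention-weighted formulation: a coarse grid partition stands in for a real superpixel algorithm such as SLIC, and the function `superpixel_mix` and its parameters are hypothetical names chosen for the example.

```python
import numpy as np

def grid_segments(h, w, grid=4):
    # Coarse grid partition standing in for a real superpixel
    # segmentation (e.g. SLIC); each cell gets a unique segment id.
    rows = np.minimum(np.arange(h) * grid // h, grid - 1)
    cols = np.minimum(np.arange(w) * grid // w, grid - 1)
    return rows[:, None] * grid + cols[None, :]

def superpixel_mix(img_a, img_b, label_a, label_b,
                   mix_frac=0.3, grid=4, seed=0):
    """Paste a random subset of segments from img_b onto img_a.

    Labels are mixed in proportion to the pasted pixel area,
    in the spirit of CutMix-style augmentation.
    """
    rng = np.random.default_rng(seed)
    segs = grid_segments(img_a.shape[0], img_a.shape[1], grid)
    ids = np.unique(segs)
    # Choose a random subset of segment ids to copy from the donor image.
    chosen = rng.choice(ids, size=max(1, int(mix_frac * len(ids))),
                        replace=False)
    mask = np.isin(segs, chosen)
    mixed = np.where(mask[..., None], img_b, img_a)
    lam = mask.mean()  # fraction of pixels taken from img_b
    mixed_label = (1 - lam) * label_a + lam * label_b
    return mixed, mixed_label
```

Because the mask follows segment boundaries, swapping in a true superpixel algorithm would preserve whole object parts instead of clipping them with a rectangular box, which is the limitation the abstract points out.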
dc.description.sponsorship: University of the Basque Country UPV/EHU (Spain), IKERBASQUE Basque Foundation for Science (Spain), Ho Chi Minh City Open University (Vietnam)
dc.language.iso: eng
dc.publisher: IEEE
dc.rights: info:eu-repo/semantics/embargoedAccess
dc.subject: Data augmentation, Local context, Superpixel, Deep visual recognition
dc.subject: data augmentation
dc.subject: local context
dc.subject: superpixel
dc.subject: deep visual recognition
dc.title: Superpixel Mixing: A Data Augmentation Technique For Robust Deep Visual Recognition Models
dc.type: info:eu-repo/semantics/conferenceObject
dc.rights.holder: (c) 2024 IEEE
dc.relation.publisherversion: https://doi.org/10.1109/ICIP51287.2024.10648078
dc.identifier.doi: 10.1109/ICIP51287.2024.10648078
dc.departamentoes: Ciencia de la computación e inteligencia artificial
dc.departamentoeu: Konputazio zientziak eta adimen artifiziala

