Show simple item record

dc.contributor.author: Picón Ruiz, Artzai
dc.contributor.author: Medela Ayarzaguena, Alfonso
dc.contributor.author: Sánchez Peralta, Luisa F.
dc.contributor.author: Cicchi, Riccardo
dc.contributor.author: Bilbao, Roberto
dc.contributor.author: Alfieri, Domenico
dc.contributor.author: Elola Artano, Andoni
dc.contributor.author: Glover, Ben
dc.contributor.author: López Saratxaga, Cristina
dc.date.accessioned: 2021-04-09T11:50:50Z
dc.date.available: 2021-04-09T11:50:50Z
dc.date.issued: 2021-02-22
dc.identifier.citation: IEEE Access 9 : 32081-32093 (2021)
dc.identifier.issn: 2169-3536
dc.identifier.uri: http://hdl.handle.net/10810/50875
dc.description.abstract: Modern photonic technologies are emerging, allowing the acquisition of in-vivo endoscopic tissue images at a microscopic scale, with characteristics comparable to traditional histological slides, and with a label-free modality. This raises the possibility of an 'optical biopsy' to aid clinical decision making. This approach faces barriers to being incorporated into clinical practice, including the lack of existing images for training, unfamiliarity of clinicians with the novel image domains and the uncertainty of trusting 'black-box' machine-learned image analysis, where the decision making remains inscrutable. In this paper, we propose a new method to transform images from novel photonics techniques (e.g. autofluorescence microscopy) into already established domains such as Hematoxylin-Eosin (H-E) microscopy through virtual reconstruction and staining. We introduce three main innovations: 1) we propose a transformation method based on a Siamese structure that simultaneously learns the direct and inverse transformations, ensuring domain back-transformation quality of the transformed data. 2) We introduce an embedding loss term that ensures similarity not only at the pixel level, but also at the image embedding description level. This drastically reduces the perception-distortion trade-off problem present in common domain transfer based on generative adversarial networks. These virtually stained images can serve as reference standard images for comparison with the already known H-E images. 3) We also incorporate an uncertainty margin concept that allows the network to measure its own confidence, and demonstrate that these reconstructed and virtually stained images can be used with previously studied classification models of H-E images that have been computationally degraded and de-stained. The three proposed methods can be seamlessly incorporated into any existing architecture. We obtained balanced accuracies of 0.95 and negative predictive values of 1.00 over the reconstructed and virtually stained image set for the detection of colorectal tumoral tissue. This is of great importance as we reduce the need for extensive labeled datasets for training, which are normally not available in the early studies of a new imaging technology.
dc.description.sponsorship: This work was supported in part by the European Union's Horizon 2020 Research and Innovation Programme under Grant 732111 (PICCOLO project), and in part by the Basque Government's Industry Department through the ELKARTEK Program's Project 3KIA under Grant KK-2020/00049. The work of Andoni Elola was supported by a pre-doctoral grant from the Basque Government under Grant PRE_2019_2_0100 and by Grant IT1229-19.
dc.language.iso: eng
dc.publisher: IEEE-Institute of Electrical and Electronics Engineers
dc.relation: info:eu-repo/grantAgreement/EC/H2020/732111
dc.rights: info:eu-repo/semantics/openAccess
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/es/
dc.subject: optical imaging
dc.subject: biomedical optical imaging
dc.subject: image reconstruction
dc.subject: microscopy
dc.subject: biopsy
dc.subject: biological system modeling
dc.subject: optical sensors
dc.subject: histopathology analysis
dc.subject: convolutional neural network
dc.subject: domain adaptation
dc.subject: optical biopsy
dc.subject: virtual staining
dc.subject: siamese semantic regression networks
dc.title: Autofluorescence Image Reconstruction and Virtual Staining for In-Vivo Optical Biopsying
dc.type: info:eu-repo/semantics/article
dc.rights.holder: This work is licensed under a Creative Commons Attribution 4.0 License (CC BY 4.0)
dc.rights.holder: Attribution 3.0 Spain
dc.relation.publisherversion: https://ieeexplore.ieee.org/document/9359782
dc.identifier.doi: 10.1109/ACCESS.2021.3060926
dc.contributor.funder: European Commission
dc.departamentoes: Ingeniería de sistemas y automática
dc.departamentoes: Matemáticas
dc.departamentoeu: Matematika
dc.departamentoeu: Sistemen ingeniaritza eta automatika
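
The abstract above describes three methodological pieces: a Siamese structure that jointly learns the direct and inverse transformations, an embedding loss that enforces similarity at the feature level, and an uncertainty margin. The following is a minimal sketch, not the authors' implementation, of how the first two loss terms could be combined in PyTorch for a paired setting. The classes TinyGenerator and TinyEncoder, the function siamese_losses, and the loss weights are illustrative assumptions; the adversarial and uncertainty-margin terms described in the paper are omitted.

```python
# Illustrative sketch only; not the code from Picón et al. (2021).
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in convolutional generator; the paper's architecture is not reproduced here."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class TinyEncoder(nn.Module):
    """Stand-in embedding network for the embedding-level loss term."""
    def __init__(self, channels=3, dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim),
        )
    def forward(self, x):
        return self.features(x)

def siamese_losses(G_fwd, G_inv, encoder, x_af, y_he,
                   w_pixel=1.0, w_cycle=1.0, w_embed=0.1):
    """Direct + inverse (back-transformation) + embedding losses for one paired batch."""
    l1 = nn.L1Loss()
    y_fake = G_fwd(x_af)            # virtual staining: autofluorescence -> H-E
    x_back = G_inv(y_fake)          # back-transformation to the source domain
    loss_pixel = l1(y_fake, y_he)   # pixel-level similarity to the reference H-E image
    loss_cycle = l1(x_back, x_af)   # back-transformation (reconstruction) quality
    loss_embed = l1(encoder(y_fake), encoder(y_he))  # embedding-level similarity
    return w_pixel * loss_pixel + w_cycle * loss_cycle + w_embed * loss_embed

if __name__ == "__main__":
    G_fwd, G_inv, enc = TinyGenerator(), TinyGenerator(), TinyEncoder()
    x_af = torch.rand(2, 3, 64, 64)  # toy autofluorescence batch
    y_he = torch.rand(2, 3, 64, 64)  # toy paired H-E batch
    print(siamese_losses(G_fwd, G_inv, enc, x_af, y_he).item())
```

The weighted sum returned by siamese_losses would be backpropagated through both generators, which is one common way to couple the direct and inverse mappings during training; the specific weighting and optimization schedule used in the paper are not given in this record.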

