Show simple item record

dc.contributor.author    Huang, Jiahao
dc.contributor.author    Ding, Weiping
dc.contributor.author    Lv, Jun
dc.contributor.author    Yang, Jingwen
dc.contributor.author    Dong, Hao
dc.contributor.author    Del Ser Lorente, Javier
dc.contributor.author    Xia, Jun
dc.contributor.author    Ren, Tiaojuan
dc.contributor.author    Wong, Stephen T.
dc.contributor.author    Yang, Guang
dc.date.accessioned    2023-02-09T17:17:08Z
dc.date.available    2023-02-09T17:17:08Z
dc.date.issued    2022-10
dc.identifier.citation    Applied Intelligence 52(13) : 14693-14710 (2022)
dc.identifier.issn    0924-669X
dc.identifier.issn    1573-7497
dc.identifier.uri    http://hdl.handle.net/10810/59739
dc.description.abstract    In clinical medicine, magnetic resonance imaging (MRI) is one of the most important tools for diagnosis, triage, prognosis, and treatment planning. However, MRI suffers from an inherently slow data acquisition process because data are collected sequentially in k-space. In recent years, most MRI reconstruction methods proposed in the literature have focused on holistic image reconstruction rather than on enhancing edge information. This work departs from that general trend by focusing on the enhancement of edge information. Specifically, we introduce a novel parallel imaging coupled dual discriminator generative adversarial network (PIDD-GAN) for fast multi-channel MRI reconstruction that incorporates multi-view information. The dual discriminator design aims to improve the edge information in MRI reconstruction: one discriminator is used for holistic image reconstruction, whereas the other enhances edge information. An improved U-Net with local and global residual learning is proposed for the generator. Frequency channel attention blocks (FCA Blocks) are embedded in the generator to incorporate attention mechanisms. A content loss is introduced to train the generator for better reconstruction quality. We performed comprehensive experiments on the Calgary-Campinas public brain MR dataset and compared our method with state-of-the-art MRI reconstruction methods. Ablation studies of residual learning were conducted on the MICCAI13 dataset to validate the proposed modules. Results show that our PIDD-GAN provides high-quality reconstructed MR images with well-preserved edge information. Single-image reconstruction takes less than 5 ms, meeting the demand for fast processing.
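The abstract describes a generator trained against two discriminators (one judging the whole reconstructed image, one judging its edge information) together with a content loss. The snippet below is a minimal, hypothetical PyTorch sketch of how such a combined objective could be assembled; the Sobel edge operator, the weighting factors (w_adv, w_edge, w_content), and all names are illustrative assumptions and do not reproduce the authors' PIDD-GAN implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelEdges(nn.Module):
    # Fixed Sobel filters used here as a stand-in edge operator for the edge branch.
    def __init__(self):
        super().__init__()
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        # Stack horizontal and vertical kernels into a (2, 1, 3, 3) conv weight.
        self.register_buffer("kernels", torch.stack([kx, kx.t()]).unsqueeze(1))

    def forward(self, x):  # x: (N, 1, H, W) magnitude image
        g = F.conv2d(x, self.kernels, padding=1)
        # Gradient magnitude; small epsilon keeps the sqrt numerically stable.
        return torch.sqrt(g[:, :1] ** 2 + g[:, 1:] ** 2 + 1e-8)

def generator_loss(d_image, d_edge, edge_op, fake, target,
                   w_adv=0.01, w_edge=0.01, w_content=1.0):
    # Adversarial term from the holistic image discriminator.
    logits_img = d_image(fake)
    adv_img = F.binary_cross_entropy_with_logits(logits_img, torch.ones_like(logits_img))
    # Adversarial term from the edge discriminator, applied to the edge map of the output.
    logits_edge = d_edge(edge_op(fake))
    adv_edge = F.binary_cross_entropy_with_logits(logits_edge, torch.ones_like(logits_edge))
    # Content (pixel-wise L1) term encouraging fidelity to the fully sampled target.
    content = F.l1_loss(fake, target)
    return w_adv * adv_img + w_edge * adv_edge + w_content * content

In a full training loop, the two discriminators would in turn be updated with the usual real/fake objective, on image pairs and on their extracted edge maps respectively.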
dc.description.sponsorship    This work was supported in part by the Zhejiang Shuren University Basic Scientific Research Special Funds, in part by the European Research Council Innovative Medicines Initiative (DRAGON, H2020-JTI-IMI2 101005122), in part by the AI for Health Imaging Award (CHAIMELEON, H2020-SC1-FA-DTS-2019-1 952172), in part by the UK Research and Innovation Future Leaders Fellowship (MR/V023799/1), in part by the British Heart Foundation (Project Number: TG/18/5/34111, PG/16/78/32402), in part by the Foundation of Peking University School and Hospital of Stomatology [KUSSNT-19B11], in part by the Peking University Health Science Center Youth Science and Technology Innovation Cultivation Fund [BMU2021PYB017], in part by the National Natural Science Foundation of China [61976120], in part by the Natural Science Foundation of Jiangsu Province [BK20191445], in part by the Qing Lan Project of Jiangsu Province, in part by the National Natural Science Foundation of China [61902338], in part by the Project of Shenzhen International Cooperation Foundation [GJHZ20180926165402083], in part by the Basque Government through the ELKARTEK funding program [KK-2020/00049], and in part by the consolidated research group MATHMODE [IT1294-19].
dc.language.iso    eng
dc.publisher    Springer
dc.rights    info:eu-repo/semantics/openAccess
dc.rights.uri    http://creativecommons.org/licenses/by/3.0/es/
dc.subject    fast MRI
dc.subject    parallel imaging
dc.subject    multi-view learning
dc.subject    generative adversarial networks
dc.subject    edge enhancement
dc.title    Edge-enhanced dual discriminator generative adversarial network for fast MRI with parallel imaging using multi-view information
dc.type    info:eu-repo/semantics/article
dc.rights.holder    © The Author(s) 2021. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
dc.rights.holder    Atribución 3.0 España (Attribution 3.0 Spain)
dc.relation.publisherversion    https://link.springer.com/article/10.1007/s10489-021-03092-w
dc.identifier.doi    10.1007/s10489-021-03092-w
dc.departamentoes    Ingeniería de comunicaciones
dc.departamentoeu    Komunikazioen ingeniaritza


