Synthetizing SWI from 3T to 7T by generative diffusion network for deep medullary veins visualization

Sui Li, Xingguang Deng, Qiwei Li, Zhiming Zhen, Luyi Han, Kang Chen, Chaoyang Zhou, Fengxi Chen, Peiyu Huang, Ruiting Zhang, Hao Chen, Tianyu Zhang, Wei Chen, Tao Tan, Chen Liu

Research output: Article › peer-reviewed

Abstract

Ultrahigh-field susceptibility-weighted imaging (SWI) provides excellent tissue contrast and anatomical detail of the brain. However, ultrahigh-field magnetic resonance (MR) scanners are expensive and subject patients to an uncomfortably noisy scanning experience. Deep learning approaches have therefore been proposed to synthesize high-field MR images from low-field MR images; most existing methods rely on generative adversarial networks (GANs) and achieve acceptable results. However, the well-recognized instability of GAN training limits synthesis performance on SWI images, which depict fine microvascular structure. Diffusion models, a promising alternative, indirectly map Gaussian noise to the target image through a slow sampling process over a large number of steps. To address this limitation, we present a generative diffusion-based deep learning imaging model, the conditional denoising diffusion probabilistic model (CDDPM), for synthesizing high-field (7 Tesla) SWI images from low-field (3 Tesla) SWI images, and we assess its clinical applicability. Crucially, the experimental results demonstrate that a diffusion-based model synthesizing 7T SWI from 3T SWI images can potentially provide an alternative route to the advantages of ultrahigh-field 7T MR imaging for deep medullary vein visualization.
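The conditional sampling scheme described in the abstract (Gaussian noise gradually denoised into the target 7T image, conditioned on the 3T input) can be sketched in a minimal form. This is an illustration only, not the paper's implementation: the linear beta schedule, the step count, and the placeholder `eps_model` stub (standing in for the trained conditional noise-prediction network) are all assumptions.

```python
import numpy as np

# Diffusion schedule: a common linear beta schedule (an assumption; the
# paper's actual schedule and step count are not given in the abstract).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def q_sample(x0, t, eps):
    """Forward process: noise a clean 7T image x0 to timestep t."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def p_sample_loop(eps_model, cond, shape, rng):
    """Reverse process: start from pure Gaussian noise and denoise step by
    step; each noise prediction is conditioned on the 3T image `cond`."""
    x = rng.standard_normal(shape)
    for t in reversed(range(T)):
        eps_hat = eps_model(x, t, cond)          # predicted noise
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps_hat) / np.sqrt(alphas[t])
        if t > 0:                                # add noise except at t = 0
            x = mean + np.sqrt(betas[t]) * rng.standard_normal(shape)
        else:
            x = mean
    return x

# Hypothetical stub predictor, used only so the sketch runs end to end;
# a real CDDPM would use a trained network here.
def dummy_eps_model(x, t, cond):
    return np.zeros_like(x)
```

In a trained model, `eps_model` would take the noisy image, the timestep, and the 3T SWI volume as conditioning input; the loop above then yields a synthetic 7T SWI image after all T denoising steps.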

Original language: English
Article number: 121475
Journal: NeuroImage
Volume: 320
DOIs
Publication status: Published - 15 Oct 2025
