Detailed Information
AttmNet: a hybrid Transformer integrating self-attention, Mamba, and multi-layer convolution for enhanced lesion segmentation (indexed in SCI-EXPANDED)
Document Type: Journal article
English Title: AttmNet: a hybrid Transformer integrating self-attention, Mamba, and multi-layer convolution for enhanced lesion segmentation
Authors: Zhu, Hancan[1]; Huang, Yibing[2]; Yao, Kelin[1]; Shang, Jinxiang[1]; Hu, Keli[1]; Li, Zhong[3]; He, Guanghua[2]
Affiliations: [1] Shaoxing Univ, Affiliated Hosp, Interdisciplinary Res Ctr, Shaoxing, Peoples R China; [2] Shaoxing Univ, Sch Math Phys & Informat, 900 Chengnan Rd, Shaoxing 312000, Peoples R China; [3] Huzhou Univ, Sch Informat & Engn, Huzhou, Peoples R China
Year: 2025
Volume: 15
Issue: 5
Pages: 4296
Journal: QUANTITATIVE IMAGING IN MEDICINE AND SURGERY
Indexed in: SCI-EXPANDED (Accession No. WOS:001493212100037); Scopus (Accession No. 2-s2.0-105004006723); WOS
Funding: This work was supported by the Humanities and Social Science Fund of the Ministry of Education of China (No. 23YJAZH232 to H.Z.) and the Zhejiang Provincial Natural Science Foundation of China (No. LZ24F020006 to K.H.).
Language: English
Keywords: Lesion segmentation; Mamba; Transformer; self-attention; convolutional neural networks (CNNs)
Abstract:
Background: Accurate lesion segmentation is critical for cancer diagnosis and treatment. Convolutional neural networks (CNNs) are widely used for medical image segmentation but struggle to capture long-range dependencies. Transformers mitigate this limitation but incur high computational costs. Mamba, a state-space model (SSM), models long-range dependencies efficiently but lacks precision in fine details. To address these challenges, this study aimed to develop a novel segmentation approach that combines the strengths of CNNs, Transformers, and Mamba, enhancing both global context understanding and local feature extraction in medical image segmentation.
Methods: We propose AttmNet, a U-shaped network for medical image segmentation that incorporates a novel structure called MAM (Multiscale-Convolution, Self-Attention, and Mamba). The MAM block integrates multi-layer convolution for multi-scale feature learning with an Att-Mamba component that combines self-attention and Mamba to capture global context effectively while preserving fine details. We evaluated AttmNet on four public datasets for breast, skin, and lung lesion segmentation.
Results: AttmNet outperformed state-of-the-art methods in intersection over union (IoU) and Dice similarity coefficient. On the breast ultrasound (BUS) dataset, AttmNet achieved a 3.38% improvement in IoU and a 4.54% increase in Dice over the next best method. On the breast ultrasound images (BUSI) dataset, AttmNet's IoU and Dice coefficients were 1.17% and 3.21% higher than the closest competitor, respectively. On the PH2 Dermoscopy Image dataset, AttmNet surpassed the next best model by 0.25% in both IoU and Dice. On the larger coronavirus disease 2019 (COVID-19) Lung dataset, AttmNet maintained strong performance, achieving higher IoU and Dice scores than the next best models, SegMamba and TransUNet.
Conclusions: AttmNet is a powerful and efficient tool for medical image segmentation, addressing the limitations of existing methods through its advanced design. The MAM block significantly enhances segmentation accuracy while maintaining computational efficiency, making AttmNet highly suitable for clinical applications. The code is available at https://github.com/hyb2840/AttmNet.
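The MAM block described in the abstract fuses three branches: multi-scale convolution for local features, self-attention for global context, and a Mamba-style state-space recurrence for efficient long-range modeling. The NumPy sketch below is a toy illustration of that three-branch idea on a 1D token sequence. The branch implementations, the additive fusion, and all names (`mam_block`, `ssm_scan`, the fixed `decay` parameter) are simplifying assumptions made here for exposition, not the released AttmNet code; consult the GitHub repository above for the actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Single-head scaled dot-product self-attention over x: (L, d).
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    return softmax(scores) @ x

def ssm_scan(x, decay=0.9):
    # Toy linear state-space recurrence h_t = decay * h_{t-1} + x_t,
    # standing in for Mamba's selective scan.
    h = np.zeros_like(x[0])
    out = np.empty_like(x)
    for t, xt in enumerate(x):
        h = decay * h + xt
        out[t] = h
    return out

def multiscale_conv(x, kernel_sizes=(3, 5)):
    # Channel-wise moving averages at several window sizes, averaged,
    # standing in for multi-layer / multi-scale convolution.
    L, _ = x.shape
    outs = []
    for k in kernel_sizes:
        pad = k // 2
        xp = np.pad(x, ((pad, pad), (0, 0)))
        outs.append(np.stack([xp[i:i + k].mean(axis=0) for i in range(L)]))
    return sum(outs) / len(outs)

def mam_block(x):
    # Additive fusion of the three branches (an assumption for this sketch).
    return multiscale_conv(x) + self_attention(x) + ssm_scan(x)

x = np.random.default_rng(0).standard_normal((16, 8))  # 16 tokens, 8 channels
y = mam_block(x)
print(y.shape)  # (16, 8)
```

Each branch preserves the (sequence length, channels) shape, so the outputs can be summed directly; the real network additionally learns projection weights and stacks such blocks inside a U-shaped encoder-decoder.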