
Details

YOLO-CSMD: Integrating Improved Convolutional Techniques for Manhole Cover Defect Detection  (Indexed in SCI-EXPANDED and EI)

Document type: Journal article

English title: YOLO-CSMD: Integrating Improved Convolutional Techniques for Manhole Cover Defect Detection

Authors: Xu, Zhiwang[1]; Luo, Haowei[2]; Zhu, Huijie[3]; Sun, Wanfa[2]; Yang, Shengying[2]

Affiliations: [1]Shaoxing Univ, Yuanpei Coll, Shaoxing 312000, Zhejiang, Peoples R China; [2]Zhejiang Univ Sci & Technol, Sch Informat & Elect Engn, Hangzhou 310023, Peoples R China; [3]Huzhou Zhongke Fanzai Elect Power Technol Dev Co Ltd, Huzhou 313000, Peoples R China

Year: 2025

Volume: 13

Pages (article number): 103155

Journal: IEEE ACCESS

Indexed in: SCI-EXPANDED (Accession No.: WOS:001512606800008), EI (Accession No.: 20252518622607), Scopus (Accession No.: 2-s2.0-105008099012), WOS

Funding: This work was supported in part by the 2025 Zhejiang Provincial Philosophy and Social Sciences Planning Project under Grant 25NDJC115YB, and in part by the 2023 Regular Project of Zhejiang Provincial Research Center for Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era under Grant 23CCG44.

Language: English

Keywords: Feature extraction; Computational modeling; YOLO; Convolution; Tensors; Neck; Mathematical models; Detectors; Transforms; Defect detection; Manhole cover defect; object detection; attention mechanism; state space model

Abstract: Recent detection algorithms for manhole cover defects exhibit limited detection capability and frequently miss defects. To address this issue, this paper proposes an improved detection algorithm based on YOLOv8, called YOLO-CSMD. First, a Convolution State Space Module (ConvSSM) is introduced into the neck network to improve the model's sequence modeling capability. Second, a Spatial Texture Transform Module (STTM) is proposed to improve texture processing and feature fusion through lightweight convolution and an efficient multiscale attention module. Finally, experimental results for YOLO-CSMD, using YOLOv8 as the baseline on a self-built dataset containing 5 defect classes and 10,115 images, show that the model's mAP@50 increased from 78.1% to 84.8% and its mAP@50-95 increased from 61.3% to 67.5%, demonstrating the effectiveness of the method.
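The abstract attributes part of STTM's efficiency to "lightweight convolution". As an illustrative sketch only (the paper's exact layer configuration is not given here, and the channel/kernel sizes below are assumptions), the snippet compares the weight count of a standard 3×3 convolution with that of a depthwise-separable convolution, a common lightweight substitute:

```python
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution (bias omitted)."""
    return c_in * k * k + c_in * c_out

# Assumed example sizes: 64 -> 64 channels, 3 x 3 kernel.
std = standard_conv_params(64, 64, 3)        # 36,864 weights
dws = depthwise_separable_params(64, 64, 3)  # 4,672 weights
print(std, dws, round(std / dws, 1))
```

At these assumed sizes the separable form uses roughly 8× fewer weights, which is why such factorized convolutions are popular in neck and attention modules of real-time detectors.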

