Abstract:
To address the high computational resource demands and low accuracy of existing defect detection algorithms for LED bulb appearance inspection, an RSME-YOLO algorithm was designed. First, a reconstructed lightweight RD-HGNet backbone was proposed to reduce computational redundancy while enhancing gradient stability. Second, a Slim-Neck module was adopted to minimize redundant computation while preserving cross-channel interactions, efficiently maintaining multi-scale feature fusion. Then, an MSA-Detect detection head incorporating a multi-head self-attention mechanism was designed to enhance feature interaction and model expressiveness. Finally, the loss function was replaced with a decoupled EIoU loss that handles width and height separately, eliminating the negative impact of unbalanced optimization on the model. Compared with the baseline YOLOv8n algorithm, RSME-YOLO reduced the parameter count by 33.2%, lowered GFLOPs by 40.7%, and had a model size of only 4.2 M. It achieved mAP50 scores of 87.1% and 86.8% on the validation and test sets, respectively. With higher detection accuracy and a lightweight design, it is better suited to intelligent upgrades of LED defect detection in small and medium-sized enterprises.
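To illustrate the loss described above: EIoU extends IoU-based regression with a center-distance penalty plus separate width and height penalties (rather than a single coupled aspect-ratio term), so the two dimensions are optimized independently. The sketch below is a minimal scalar reference, not the paper's implementation; box format `(x1, y1, x2, y2)` and the function name are assumptions for illustration.

```python
def eiou_loss(box_p, box_g):
    """Minimal reference sketch of the EIoU loss for two axis-aligned
    boxes in (x1, y1, x2, y2) format (illustrative, not the paper's code).

    EIoU = 1 - IoU + center-distance penalty
           + decoupled width penalty + decoupled height penalty.
    """
    px1, py1, px2, py2 = box_p
    gx1, gy1, gx2, gy2 = box_g

    # Intersection / union for the IoU term
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union if union > 0 else 0.0

    # Width/height of the smallest enclosing box
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)

    # Center-distance penalty, normalized by the enclosing diagonal
    rho2 = ((px1 + px2 - gx1 - gx2) / 2) ** 2 + ((py1 + py2 - gy1 - gy2) / 2) ** 2
    diag2 = cw ** 2 + ch ** 2

    # Decoupled penalties: width and height errors are handled separately,
    # each normalized by the enclosing box's own dimension
    dw2 = ((px2 - px1) - (gx2 - gx1)) ** 2
    dh2 = ((py2 - py1) - (gy2 - gy1)) ** 2

    return 1 - iou + rho2 / diag2 + dw2 / cw ** 2 + dh2 / ch ** 2
```

For identical boxes every term vanishes and the loss is 0; as overlap shrinks or the width/height mismatch grows, each penalty increases independently, which is what removes the coupled-optimization imbalance of aspect-ratio-based losses.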