[1] Huang Z, Li L, Krizek GC, Sun L. 2023. Research on traffic sign detection based on improved YOLOv8. Journal of Computer and Communications 11:226−32. doi: 10.4236/jcc.2023.117014

[2] Zheng T, Huang Y, Liu Y, Tang W, Yang Z, et al. 2022. CLRNet: cross layer refinement network for lane detection. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18−24 June 2022. USA: IEEE. pp. 888−97. doi: 10.1109/cvpr52688.2022.00097

[3] Qie K, Wang J, Li Z, Wang Z, Luo W. 2024. Recognition of occluded pedestrians from the driver's perspective for extending sight distance and ensuring driving safety at signal-free intersections. Digital Transportation and Safety 3:65−74. doi: 10.48130/dts-0024-0007

[4] Wang Q, Li X, Lu M. 2023. An improved traffic sign detection and recognition deep model based on YOLOv5. IEEE Access 11:54679−91. doi: 10.1109/ACCESS.2023.3281551

[5] Lai H, Chen L, Liu W, Yan Z, Ye S. 2023. STC-YOLO: small object detection network for traffic signs in complex environments. Sensors 23:5307. doi: 10.3390/s23115307
[6] Chu J, Zhang C, Yan M, Zhang H, Ge T. 2023. TRD-YOLO: a real-time, high-performance small traffic sign detection algorithm. Sensors 23:3871. doi: 10.3390/s23083871

[7] de la Escalera A, Moreno LE, Salichs MA, Armingol JM. 1997. Road traffic sign detection and classification. IEEE Transactions on Industrial Electronics 44:848−59. doi: 10.1109/41.649946

[8] Fleyeh H. 2004. Color detection and segmentation for road and traffic signs. IEEE Conference on Cybernetics and Intelligent Systems, Singapore, 1−3 December 2004. USA: IEEE. pp. 809−14. doi: 10.1109/iccis.2004.1460692

[9] Maldonado-Bascón S, Lafuente-Arroyo S, Gil-Jiménez P, Gómez-Moreno H, López-Ferreras F. 2007. Road-sign detection and recognition based on support vector machines. IEEE Transactions on Intelligent Transportation Systems 8:264−78. doi: 10.1109/TITS.2007.895311

[10] Cireşan D, Meier U, Masci J, Schmidhuber J. 2011. A committee of neural networks for traffic sign classification. The 2011 International Joint Conference on Neural Networks, San Jose, CA, USA, 31 July − 5 August 2011. USA: IEEE. pp. 1918−21. doi: 10.1109/ijcnn.2011.6033458
[11] Girshick R, Donahue J, Darrell T, Malik J. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23−28 June 2014. USA: IEEE. pp. 580−87. doi: 10.1109/cvpr.2014.81

[12] Girshick R. 2015. Fast R-CNN. 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7−13 December 2015. USA: IEEE. pp. 1440−48. doi: 10.1109/iccv.2015.169

[13] Ren S, He K, Girshick R, Sun J. 2017. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 39:1137−49. doi: 10.1109/TPAMI.2016.2577031

[14] Redmon J, Divvala S, Girshick R, Farhadi A. 2016. You only look once: unified, real-time object detection. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27−30 June 2016. USA: IEEE. pp. 779−88. doi: 10.1109/cvpr.2016.91

[15] Redmon J, Farhadi A. 2017. YOLO9000: better, faster, stronger. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21−26 July 2017. USA: IEEE. pp. 6517−25. doi: 10.1109/cvpr.2017.690
[16] Redmon J, Farhadi A. 2018. YOLOv3: an incremental improvement. arXiv Preprint:1804.02767. doi: 10.48550/arXiv.1804.02767

[17] Bochkovskiy A, Wang CY, Liao HYM. 2020. YOLOv4: optimal speed and accuracy of object detection. arXiv Preprint:2004.10934. doi: 10.48550/arXiv.2004.10934

[18] Chen B, Fan X. 2024. MSGC-YOLO: an improved lightweight traffic sign detection model under snow conditions. Mathematics 12:1539. doi: 10.3390/math12101539

[19] Zhang LJ, Fang JJ, Liu YX, Hai FL, Rao ZQ, et al. 2024. CR-YOLOv8: multiscale object detection in traffic sign images. IEEE Access 12:219−28. doi: 10.1109/ACCESS.2023.3347352
[20] Kim W. 2009. Cloud computing: today and tomorrow. The Journal of Object Technology 8:65−72. doi: 10.5381/jot.2009.8.1.c4

[21] Luo Y, Ci Y, Jiang S, Wei X. 2024. A novel lightweight real-time traffic sign detection method based on an embedded device and YOLOv8. Journal of Real-Time Image Processing 21:24. doi: 10.1007/s11554-023-01403-7

[22] Artamonov NS, Yakimov PY. 2018. Towards real-time traffic sign recognition via YOLO on a mobile GPU. Journal of Physics: Conference Series 1096:012086. doi: 10.1088/1742-6596/1096/1/012086

[23] He K, Zhang X, Ren S, Sun J. 2015. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 37:1904−16. doi: 10.1109/TPAMI.2015.2389824
[24] Lin TY, Dollár P, Girshick R, He K, Hariharan B, et al. 2017. Feature pyramid networks for object detection. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21−26 July 2017. USA: IEEE. pp. 936−44. doi: 10.1109/cvpr.2017.106

[25] Li H, Xiong P, An J, Wang L. 2018. Pyramid attention network for semantic segmentation. arXiv Preprint:1805.10180. doi: 10.48550/arXiv.1805.10180

[26] Zheng Z, Wang P, Liu W, Li J, Ye R, et al. 2020. Distance-IoU loss: faster and better learning for bounding box regression. Proceedings of the AAAI Conference on Artificial Intelligence 34:12993−13000. doi: 10.1609/aaai.v34i07.6999

[27] Soydaner D. 2022. Attention mechanism in neural networks: where it comes and where it goes. Neural Computing and Applications 34:13371−85. doi: 10.1007/s00521-022-07366-3
[28] Sun Z, Yang H, Zhang Z, Liu J, Zhang X. 2022. An improved YOLOv5-based tapping trajectory detection method for natural rubber trees. Agriculture 12:1309. doi: 10.3390/agriculture12091309

[29] Hou Q, Zhou D, Feng J. 2021. Coordinate attention for efficient mobile network design. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20−25 June 2021. USA: IEEE. pp. 13708−17. doi: 10.1109/cvpr46437.2021.01350

[30] Zhang YF, Ren W, Zhang Z, Jia Z, Wang L, et al. 2022. Focal and efficient IOU loss for accurate bounding box regression. Neurocomputing 506:146−57. doi: 10.1016/j.neucom.2022.07.042

[31] Zhang J, Zou X, Kuang LD, Wang J, Sherratt RS, et al. 2022. CCTSDB 2021: a more comprehensive traffic sign detection benchmark. Human-centric Computing and Information Sciences 12:23. doi: 10.22967/HCIS.2022.12.023
[32] Molchanov P, Tyree S, Karras T, Aila T, Kautz J. 2016. Pruning convolutional neural networks for resource efficient inference. arXiv Preprint:1611.06440. doi: 10.48550/arXiv.1611.06440

[33] Han S, Mao H, Dally WJ. 2015. Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv Preprint:1510.00149. doi: 10.48550/arXiv.1510.00149

[34] Rastegari M, Ordonez V, Redmon J, Farhadi A. 2016. XNOR-Net: ImageNet classification using binary convolutional neural networks. In Computer Vision – ECCV 2016. ECCV 2016. Lecture Notes in Computer Science, eds. Leibe B, Matas J, Sebe N, Welling M. vol. 9908. Cham: Springer. pp. 525−42. doi: 10.1007/978-3-319-46493-0_32

[35] Li Z, Ni B, Zhang W, Yang X, Gao W. 2017. Performance guaranteed network acceleration via high-order residual quantization. 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22−29 October 2017. USA: IEEE. pp. 2584−92. doi: 10.1109/iccv.2017.282

[36] Romero A, Ballas N, Kahou SE, Chassang A, Gatta C, et al. 2014. FitNets: hints for thin deep nets. arXiv Preprint:1412.6550. doi: 10.48550/arXiv.1412.6550

[37] Kim J, Park S, Kwak N. 2018. Paraphrasing complex network: network compression via factor transfer. arXiv Preprint:1802.04977. doi: 10.48550/arXiv.1802.04977