[1]

Bao L, Wang Q, Jiang Y. 2021. Review of digital twin for intelligent transportation system. 2021 International Conference on Information Control, Electrical Engineering and Rail Transit (ICEERT), Lanzhou, China, 30 October 2021 − 01 November 2021. USA: IEEE. pp. 309−15 doi: 10.1109/ICEERT53919.2021.00064

[2]

Martínez-Gutiérrez A, Díez-González J, Ferrero-Guillén R, Verde P, Álvarez R, et al. 2021. Digital twin for automatic transportation in Industry 4.0. Sensors 21(10):3344 doi: 10.3390/s21103344

[3]

Kušić K, Schumann R, Ivanjko E. 2023. A digital twin in transportation: Real-time synergy of traffic data streams and simulation for virtualizing motorway dynamics. Advanced Engineering Informatics 55:101858 doi: 10.1016/j.aei.2022.101858

[4]

Wang Z, Gupta R, Han K, Wang H, Ganlath A, et al. 2022. Mobility digital twin: Concept, architecture, case study, and future challenges. IEEE Internet of Things Journal 9(18):17452−67 doi: 10.1109/JIOT.2022.3156028

[5]

Datondji SRE, Dupuis Y, Subirats P, Vasseur P. 2016. A survey of vision-based traffic monitoring of road intersections. IEEE Transactions on Intelligent Transportation Systems 17(10):2681−98 doi: 10.1109/TITS.2016.2530146

[6]

Zimmer W, Birkner J, Brucker M, Tung Nguyen H, Petrovski S, et al. 2023. InfraDet3D: multi-modal 3D object detection based on roadside infrastructure camera and LiDAR sensors. 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 4−7 June 2023. USA: IEEE. pp. 1−8 doi: 10.1109/IV55152.2023.10186723

[7]

Yoo JH, Kim Y, Kim J, Choi JW. 2020. 3D-CVF: generating joint camera and LiDAR features using cross-view spatial feature fusion for 3D object detection. In Computer Vision–ECCV 2020: 16th European Conference. Cham: Springer. pp. 720−36 doi: 10.1007/978-3-030-58583-9_43

[8]

Yurtsever E, Lambert J, Carballo A, Takeda K. 2020. A survey of autonomous driving: common practices and emerging technologies. IEEE Access 8:58443−69 doi: 10.1109/ACCESS.2020.2983149

[9]

Bai Z, Wu G, Qi X, Liu Y, Oguchi K, et al. 2022. Infrastructure-based object detection and tracking for cooperative driving automation: A survey. 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, 4−9 June 2022. USA: IEEE. pp. 1366−73 doi: 10.1109/IV51971.2022.9827461

[10]

Bijelic M, Gruber T, Mannan F, Kraus F, Ritter W, et al. 2020. Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13−19 June 2020. USA: IEEE. pp. 11682−92 doi: 10.1109/CVPR42600.2020.01170

[11]

Geiger A, Lenz P, Urtasun R. 2012. Are we ready for autonomous driving? The KITTI vision benchmark suite. 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16−21 June 2012. USA: IEEE. pp. 3354−61 doi: 10.1109/CVPR.2012.6248074

[12]

Klein LA. 2024. Roadside sensors for traffic management. IEEE Intelligent Transportation Systems Magazine 16(4):21−44 doi: 10.1109/MITS.2023.3346842

[13]

Guerrero-Ibáñez J, Zeadally S, Contreras-Castillo J. 2018. Sensor technologies for intelligent transportation systems. Sensors 18(4):1212 doi: 10.3390/s18041212

[14]

Bassford M, Painter B. 2015. Development of an intelligent fisheye camera. 2015 International Conference on Intelligent Environments, Prague, Czech Republic, 15−17 July 2015. USA: IEEE. pp. 160−63 doi: 10.1109/IE.2015.34

[15]

Li Y, Ibanez-Guzman J. 2020. Lidar for autonomous driving: The principles, challenges, and trends for automotive lidar and perception systems. IEEE Signal Processing Magazine 37(4):50−61 doi: 10.1109/MSP.2020.2973615

[16]

Liu Y, Wang Z, Han K, Shou Z, Tiwari P, et al. 2020. Sensor fusion of camera and cloud digital twin information for intelligent vehicles. 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October 2020 − 13 November 2020. USA: IEEE. pp. 182−87 doi: 10.1109/IV47402.2020.9304643

[17]

Zheng O, Abdel-Aty M, Yue L, Abdelraouf A, Wang Z, et al. 2024. CitySim: a drone-based vehicle trajectory dataset for safety-oriented research and digital twins. Transportation Research Record 2678(4):606−21 doi: 10.1177/03611981231185768

[18]

He J, Li P, An X, Wang C. 2024. A reconstruction methodology of dynamic construction site activities in 3D digital twin models based on camera information. Buildings 14(7):2113 doi: 10.3390/buildings14072113

[19]

Wojke N, Bewley A, Paulus D. 2017. Simple online and realtime tracking with a deep association metric. 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17−20 September 2017. USA: IEEE. pp. 3645−49 doi: 10.1109/ICIP.2017.8296962

[20]

Bai Z, Nayak SP, Zhao X, Wu G, Barth MJ, et al. 2023. Cyber mobility mirror: a deep learning-based real-world object perception platform using roadside LiDAR. IEEE Transactions on Intelligent Transportation Systems 24(9):9476−89 doi: 10.1109/TITS.2023.3268281

[21]

Young SE, Bensen EA, Zhu L, Day C, Lott JS, et al. 2022. Concept of operations of next-generation traffic control utilizing infrastructure-based cooperative perception. International Conference on Transportation and Development 2022, Seattle, WA, USA, 31 May − 3 June 2022. USA: American Society of Civil Engineers. pp. 93−104 doi: 10.1061/9780784484326

[22]

Chen Y, Zheng L, Tan Z. 2024. Roadside LiDAR placement for cooperative traffic detection by a novel chance constrained stochastic simulation optimization approach. Transportation Research Part C: Emerging Technologies 167:104838 doi: 10.1016/j.trc.2024.104838

[23]

He K, Zhang X, Ren S, Sun J. 2015. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 37(9):1904−16 doi: 10.1109/TPAMI.2015.2389824

[24]

Jaiswal SK, Agrawal R. 2024. A comprehensive review of YOLOv5: advances in real-time object detection. International Journal of Innovative Research in Computer Science & Technology 12(3):75−80 doi: 10.55524/ijircst.2024.12.3.12

[25]

Bochkovskiy A, Wang CY, Liao HYM. 2020. YOLOv4: optimal speed and accuracy of object detection. arXiv Preprint 2004.10934 doi: 10.48550/arXiv.2004.10934

[26]

Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, et al. 2016. SSD: single shot MultiBox detector. In European Conference on Computer Vision. Cham: Springer. pp. 21−37 doi: 10.1007/978-3-319-46448-0_2

[27]

Lang AH, Vora S, Caesar H, Zhou L, Yang J, et al. 2019. PointPillars: fast encoders for object detection from point clouds. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15−20 June 2019. USA: IEEE. pp. 12697−705 doi: 10.1109/CVPR.2019.01298

[28]

Zhou Y, Tuzel O. 2018. VoxelNet: end-to-end learning for point cloud based 3D object detection. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18−23 June 2018. USA: IEEE. pp. 4490−99 doi: 10.1109/CVPR.2018.00472

[29]

Yan Y, Mao Y, Li B. 2018. SECOND: sparsely embedded convolutional detection. Sensors 18(10):3337 doi: 10.3390/s18103337

[30]

Qi CR, Liu W, Wu C, Su H, Guibas LJ. 2018. Frustum PointNets for 3D object detection from RGB-D data. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18−23 June 2018. USA: IEEE. pp. 918−27 doi: 10.1109/CVPR.2018.00102