ARTICLE   Open Access    

Applying Artificial Intelligence (AI) to improve fire response activities

Abstract: This research discusses how to use a real-time Artificial Intelligence (AI) object detection model to improve on-site incident command and personnel accountability in fire response. We utilized images of firegrounds obtained from an online resource and a local fire department to train the AI object detector, YOLOv4. The real-time AI object detector reached more than ninety percent accuracy when counting the fire trucks and firefighters on the ground in images from the local fire department. Our initial results indicate that AI provides an innovative method for maintaining fireground personnel accountability at the scene of a fire. By connecting cameras to additional emergency management equipment (e.g., cameras on fire trucks, ambulances, or drones), this research highlights how the technology can be applied broadly to various disaster response scenarios, improving on-site fire incident command and enhancing personnel accountability on the fireground.
References

    [1]

    Baggaley K. 2017. Drones are fighting wildfires in some very surprising ways. NBC News, November 16, 2017. www.nbcnews.com/mach/science/drones-are-fighting-wildfirefires-some-very-surprising-ways-ncna820966 (Accessed January 24, 2022)

    [2]

    Gabbert B. 2018. Drone flying at night detects spot fire. Wildfire Today, August 15, 2018. https://wildfiretoday.com/2018/08/15/drone-flying-at-night-detects-spot-fire/ (Accessed January 24, 2022)

    [3]

    Weichenthal S, Hatzopoulou M, Brauer M. 2019. A picture tells a thousand… exposures: Opportunities and challenges of deep learning image analyses in exposure science and environmental epidemiology. Environment International 122:3−10

    doi: 10.1016/j.envint.2018.11.042

    [4]

    Norman J. 1998. Fire officer's handbook of tactics. 2nd edition. Saddle Brook: Fire Engineering Books & Videos. pp. 15−35

    [5]

    Smith JP. 2002. Strategic and tactical considerations on the fireground. Upper Saddle River: Pearson Education, Inc. pp. 61−84

    [6]

    Barr R, Eversole J. 2003. The fire chief's handbook. 6th edition. Tulsa, USA: PennWell Corporation

    [7]

    Useem M, Cook J, Sutton L. 2005. Developing leaders for decision making under stress: Wildland firefighters in the South Canyon Fire and its aftermath. Academy of Management Learning & Education 4:461−85

    doi: 10.5465/AMLE.2005.19086788

    [8]

    Klein G, Calderwood R, Clinton-Cirocco A. 2010. Rapid decision making on the fire ground: The original study plus a postscript. Journal of Cognitive Engineering and Decision Making 4:186−209

    doi: 10.1518/155534310X12844000801203

    [9]

    Calkin D, Thompson M, Finney M, Hyde K. 2011. A real-time risk assessment tool supporting wildland fire decision-making. Journal of Forestry. 2011:274–80 www.fs.fed.us/rm/pubs_other/rmrs_2011_calkin_d003.pdf

    [10]

    Holmes TP, Calkin DE. 2013. Econometric analysis of fire suppression production functions for large wildland fires. International Journal of Wildland Fire 22:246−55

    doi: 10.1071/WF11098

    [11]

    Martell DL. 2015. A review of recent forest and wildland fire management decision support systems research. Current Forestry Reports 1:128−37

    doi: 10.1007/s40725-015-0011-y

    [12]

    Ren S, He K, Girshick R, Sun J. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems 28:91–99. https://arxiv.org/pdf/1506.01497.pdf

    [13]

    Chen Y, Li W, Sakaridis C, Dai D, Van Gool L. 2018. Domain adaptive faster R-CNN for object detection in the wild. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, Utah, USA, 2018. pp. 3339–48. USA: IEEE. https://doi.org/10.1109/CVPR.2018.00352

    [14]

    Wu M, Yue H, Wang J, Huang Y, Liu M, et al. 2020. Object detection based on RGC mask R-CNN. IET Image Processing 14:1502−8

    doi: 10.1049/iet-ipr.2019.0057

    [15]

    Sun P, Zhang R, Jiang Y, Kong T, Xu C, et al. 2021. Sparse R-CNN: End-to-end object detection with learnable proposals. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 2021. pp. 14454–63. USA: IEEE https://doi.org/10.1109/CVPR46437.2021.01422

    [16]

    Redmon J, Divvala S, Girshick R, Farhadi A. 2016. You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Nevada, USA, pp. 779–88. USA: IEEE www.cv-foundation.org/openaccess/content_cvpr_2016/html/Redmon_You_Only_Look_CVPR_2016_paper.html

    [17]

    Redmon J, Farhadi A. 2017. YOLO9000: Better, faster, stronger. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, USA, 2017, pp. 6517–25. USA: IEEE. https://doi.org/10.1109/CVPR.2017.690

    [18]

    Redmon J, Farhadi A. 2018. YOLOv3: An incremental improvement. ArXiv: 1804.02767. https://arxiv.org/pdf/1804.02767.pdf

    [19]

    Bochkovskiy A, Wang CY, Liao HYM. 2020. YOLOv4: Optimal speed and accuracy of object detection. ArXiv:2004.10934 https://arxiv.org/pdf/2004.10934.pdf

    [20]

    Li Y, Wang H, Dang LM, Nguyen TN, Han D, et al. 2020. A deep learning-based hybrid framework for object detection and recognition in autonomous driving. IEEE Access 8:194228−39

    doi: 10.1109/ACCESS.2020.3033289

    [21]

    Sozzi M, Cantalamessa S, Cogato A, Kayad A, Marinello F. 2021. Grape yield spatial variability assessment using YOLOv4 object detection algorithm. In Precision agriculture ’21, ed. Stafford JV. Wageningen: Wageningen Academic Publishers. pp. 193−98. https://doi.org/10.3920/978-90-8686-916-9_22

    [22]

    Kajabad EN, Begen P, Nizomutdinov B, Ivanov S. 2021. YOLOv4 for urban object detection: Case of electronic inventory in St. Petersburg. 2021 28th Conference of Open Innovations Association (FRUCT), 2021, Moscow, Russia, pp. 316–21. USA: IEEE https://doi.org/10.23919/FRUCT50888.2021.9347622

    [23]

    Cai C, Nishimura T, Hwang J, Hu X, Kuroda A. 2021. Asbestos detection with fluorescence microscopy images and deep learning. Sensors 21:4582

    doi: 10.3390/s21134582

    [24]

    Hu P, Cai C, Yi H, Zhao J, Feng Y, et al. 2022. Aiding airway obstruction diagnosis with computational fluid dynamics and convolutional neural network: A new perspective and numerical case study. Journal of Fluids Engineering 144:081206

    doi: 10.1115/1.4053651

    [25]

    Deng J, Dong W, Socher R, Li L, Li K, et al. 2009. Imagenet: A large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 2009, pp. 248–55. USA: IEEE. https://doi.org/10.1109/cvprw.2009.5206848

    [26]

    Wang CY, Liao HYM, Yeh I, Wu YH, Chen PY, et al. 2020. CSPNet: A new backbone that can enhance learning capability of CNN. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 2020, pp. 390–91. USA: IEEE. https://doi.org/10.1109/CVPRW50498.2020.00203

    [27]

    Liu S, Qi L, Qin H, Shi J, Jia J. 2018. Path aggregation network for instance segmentation. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 8759–68. USA: IEEE. https://doi.org/10.1109/CVPR.2018.00913

    [28]

    He K, Zhang X, Ren S, Sun J. 2015. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 2015, pp. 1026–34. USA: IEEE. https://doi.org/10.1109/ICCV.2015.123

    [29]

    He K, Zhang X, Ren S, Sun J. 2015. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 37:1904−16

    doi: 10.1109/TPAMI.2015.2389824

    [30]

    Goutte C, Gaussier E. 2005. A probabilistic interpretation of precision, recall and F-score, with implication for evaluation. In Advances in Information Retrieval. ECIR 2005. Lecture Notes in Computer Science, vol 3408: XVIII, 574. Heidelberg: Springer, Berlin, Heidelberg. pp. 345−59. https://doi.org/10.1007/978-3-540-31865-1_25


    • Incident Commanders (ICs) are required to make decisions under time constraints during extreme conditions on the fireground. The decision-making, or 'size-up', process requires the ICs to rapidly collect information about the resources and personnel currently at the scene of a fire. This research is designed to provide firefighting ICs with enhanced access to, and a better grasp of, up-to-date information about on-site resources and personnel, thereby helping them make more effective decisions on the fireground.

      The researchers will discuss how to train Artificial Intelligence (AI) object detectors to count the fire apparatus and firefighters on the ground. Firefighters have recently started utilizing various new technological devices (e.g., drones) to capture real-time images and videos for firefighting purposes[1,2]. With these images and videos, deep learning image analyses, which can extract high-level features from raw images, have been successfully applied to various environmental health problems[3]. Because studies in this field are still at an early stage, the capabilities and limitations of these technologies on the fireground are not yet well understood; nevertheless, the early findings give us reason to believe that these technologies can be widely adopted by fire departments.

      To advance this literature, we train an AI object detector to automatically count fire trucks and firefighters at the scene of a fire, using image resources from Google Images and a local fire department, and we then compare the AI's performance on the two resources. This approach offers three benefits. First, the system facilitates the collection and use of information to improve situational awareness at the scene of a fire. Second, by counting the available resources and personnel automatically, responders no longer need to walk across an active fire scene to locate resources; the technology therefore considerably reduces the risk to the lives of firefighters performing practical and expected activities on the fireground. Finally, by counting the number of firefighters with the trained object detection model and comparing images taken at different points in the response, the ICs can operate an accountability system over a broad area. The system can count the firefighters at the scene every few seconds and automatically warn the ICs if fewer firefighters are found. Consequently, this research can help the ICs track every firefighter on location and immediately identify responders who may need to be rescued, which mitigates some of the safety and health hazards associated with firefighting.

    • Firefighting is an activity that relies heavily on teamwork. As a result, leaders in fire departments must quickly grasp environmental information, make decisions based on environmental changes, and orchestrate the on-site firefighting activities. Fire officers call these processes 'size-up', which includes obtaining critical information such as construction, life hazards, water supply, apparatus, and the workforce on-site[4,5].

      To fight fires, the ICs must consider the fuel, topography, road access, structural exposure, water supplies, and the number and types of suppression resources dispatched[6]. Since fire departments normally respond to fires that impact a broad area, many of which spread quickly and last for a prolonged period, the ICs are expected to make decisions based on safety, speed, and suppression[7]. Safety aims to prevent harm to firefighters by tracking their locations and directing them to safe places. Speed refers to recognizing situational awareness cues (such as weather, risks to life, and available resources) in order to make sound and timely decisions[8]. Suppression means formulating technical decisions to suppress a fire by establishing how many firefighters are required and what resources are available at the scene.

      These situational conditions demonstrate that a real-time firefighting resource and personnel management system is imperative for fire decision-makers. With enhanced awareness of the suppression resources on-site, the ICs would contend with fewer uncertainties in their decision-making at the fireground. Moreover, better-aggregated information provides an easier path to implementing preferred suppression strategies in conditions that require consideration of multiple decision factors[9−11]. As a result, the objective of this research project is to help ICs improve the management of on-site fire apparatus and firefighters. To reach this objective, we incorporate an AI object detection model to facilitate fire incident command and firefighter accountability.

      Convolutional Neural Networks (CNNs) have been applied successfully in many areas, especially in object detection[12−15]. Over the last few decades, deep neural networks have become one of the most powerful tools for solving machine learning problems, and the CNN has become one of the most popular deep neural networks due to its groundbreaking results compared to other approaches. A CNN can reduce the number of parameters needed to solve complex tasks and obtain abstract features in deeper layers from raw images. Depending on the architecture, a CNN model contains multiple convolutional layers, in which multiple filters are applied to extract features from images. One popular state-of-the-art CNN-based model for detecting objects in an image is 'You Only Look Once', or YOLO, proposed by Redmon et al.[16]. YOLO version 3 (YOLOv3) expands on its predecessor, YOLO version 2 (YOLOv2), by using Darknet-53 (a CNN with 53 convolutional layers) as its backbone instead of Darknet-19 (19 convolutional layers)[17,18]. Although these additional layers greatly improved the precision of YOLOv3 over YOLOv2, they also made it slower. YOLO version 4 (YOLOv4) was developed to improve both the precision and the speed of YOLOv3, and it is considered one of the most accurate real-time (23−38 frames per second) neural network detectors to date[19]. Since its publication in 2020, YOLOv4 has been successfully applied in various industries, including autonomous driving, agriculture, electronics, and public health[20−24]. Because of its efficiency, anyone with a GTX 1080 Ti or better Graphics Processing Unit (GPU) can train a fast and accurate object detector. In addition, YOLOv4 is open source, which presents an opportunity to create readily accessible applications for personal computers, smartphones, and tablets. Various image resources are available for training AI models; however, they might not be suitable for characterizing local features.

      Therefore, the researchers utilize YOLOv4 to recognize and count on-site fire trucks and firefighters, and to differentiate firefighters from non-firefighters. We tested two visual datasets for training the model: (1) images downloaded from Google Images; and (2) images obtained from a local fire department in Taiwan. We then discuss the potential of using an AI object detection model to improve on-site incident command and personnel accountability; a counting sketch built on such a detector is shown below.
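      To make the counting step concrete, the following is a minimal sketch of how a trained YOLOv4 detector could be used to count the three classes in a fireground image with OpenCV's DNN module (version 4.4 or later). The file names "fireground.cfg" and "fireground.weights", the class order, and the input size are hypothetical placeholders, not the authors' released files or settings.

```python
# Minimal counting sketch (assumed file names and class order; OpenCV >= 4.4).
from collections import Counter

import cv2
import numpy as np

CLASS_NAMES = ["firefighter", "non-firefighter", "firetruck"]  # assumed label order

# Load a Darknet-format YOLOv4 model and wrap it in OpenCV's high-level detection API.
net = cv2.dnn.readNetFromDarknet("fireground.cfg", "fireground.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(608, 608), scale=1 / 255.0, swapRB=True)

def count_objects(frame) -> Counter:
    """Detect objects in one BGR frame and return how many of each class were found."""
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    return Counter(CLASS_NAMES[int(c)] for c in np.asarray(class_ids).reshape(-1))

if __name__ == "__main__":
    image = cv2.imread("scene.jpg")  # placeholder fireground image
    print(count_objects(image))      # e.g. Counter({'firefighter': 6, 'firetruck': 2})
```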

    • We compared the YOLOv4 performance at a detection threshold of 0.5 (@0.5) using the two visual datasets. For the testing images, the mean average precision (mAP) @0.5 using visual dataset 2 (on-site images from a local fire department in Taiwan) reached 91%, much higher than the mAP@0.5 of 27% using visual dataset 1 (Google Images). The four model evaluation metrics (accuracy, precision, recall, and F1-score) are summarized in Table 1. For both the training and testing datasets, the model performance using visual dataset 2 was much higher than that using visual dataset 1.

      Table 1.  Comparison of model performances using the two datasets.

                        Training                                Testing
                        Visual dataset 1    Visual dataset 2    Visual dataset 1    Visual dataset 2
                        (Internet)          (Taiwan on-site)    (Internet)          (Taiwan on-site)
      Accuracy          0.78                0.96                0.37                0.71
      Precision         0.77                0.97                0.51                0.83
      Recall            0.83                0.97                0.38                0.77
      F1-score          0.80                0.97                0.44                0.80

      We compared the average precision for each class using the two datasets, as shown in Table 2. The results show that the model trained on the on-site images from the local Taiwan fire department better differentiated firefighters from non-firefighters, indicating that local fireground images are preferable for capturing local features.

      Table 2.  Comparison of the average precision for each class using the two datasets.

                        Training                                Testing
                        Visual dataset 1    Visual dataset 2    Visual dataset 1    Visual dataset 2
                        (Internet)          (Taiwan on-site)    (Internet)          (Taiwan on-site)
      Firefighter       0.88                0.99                0.40                0.75
      Non-firefighter   0.73                0.98                0.07                0.98
      Firetruck         0.57                1.00                0.35                1.00
    • Based on the previous results, we found that a well-trained AI object detection model can accurately identify and count firefighters and fire apparatus at the location of a fire. In Fig. 1, we outline possible future fire safety management procedures using AI. When fire companies first arrive at the scene of a fire, cameras begin capturing images/videos and transfer them to Cloud servers at an off-site location. The cameras can be mounted on drones and vehicles or carried by firefighters. The on-site images/videos are then processed by the trained AI models and the visual database in the Cloud. Finally, the ICs on-site receive the information they need for decision-making, including the number of fire trucks, the number of firefighters, other objects, and the number of hazardous activities. Based on this, we propose three possible future applications of this research.

      Figure 1. 

      Brainstorm of the procedures of fire safety management practices using AI.

      Firstly, this research would increase situational awareness on the ground. If researchers train the AI to recognize additional objects, such as different types of vehicles, fire hydrants on the streets, and the specific uniforms worn by firefighters, Emergency Medical Technicians, and police officers, fire commanders could quickly and precisely grasp critical information (e.g., the number of fire apparatus) on-site.

      Also, by connecting cameras to wireless or satellite networks, a system incorporating the AI object detection model described in this article will be able to continuously count and compare the number of firefighters and fire apparatus on the ground. This method can be applied to maintain fireground personnel accountability as a continuous protocol; a monitoring sketch is given below. Once a firefighter is no longer recognized at the scene, the IC can first attempt to contact the individual and then determine whether additional personnel need to be sent to search for and rescue the firefighter.
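      As a sketch of that continuous protocol, the loop below builds on the hypothetical count_objects helper from the earlier sketch; it re-counts firefighters every few seconds from a camera stream and raises an alert for the IC whenever the count drops. The stream source and the alert channel are placeholders, not part of the published system.

```python
# Hedged monitoring sketch: periodic re-count plus a simple drop alert.
import time

import cv2

def monitor(stream_source=0, interval_s: float = 5.0, alert=print):
    """Re-count firefighters every interval_s seconds and alert the IC on a drop."""
    cap = cv2.VideoCapture(stream_source)  # e.g. a drone or vehicle camera feed (placeholder)
    last_count = None
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        current = count_objects(frame).get("firefighter", 0)  # helper from the earlier sketch
        if last_count is not None and current < last_count:
            alert(f"Accountability alert: firefighter count dropped from {last_count} to {current}.")
        last_count = current
        time.sleep(interval_s)
    cap.release()
```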

      Last but not least, by continuously monitoring firefighting activities, the software will be able to identify dangerous activities on-site in real time. As previously mentioned, AI-powered image recognition has been applied to improve occupational safety in other fields (e.g., building construction). When additional training images and/or videos are added to the datasets, the AI will be able to identify signs of fatigue in firefighters. For instance, a firefighter's helmet touching the ground is a clear sign of fatigue, and the AI software could identify this activity and immediately notify other firefighters on the ground.

    • This pilot study successfully utilized images of firegrounds to train an AI model to count the number of firefighters, non-firefighters, and firetrucks in real time. As firefighters becoming lost on the fireground is one of the major causes of American firefighter fatalities, fire departments can apply this research to improve fireground personnel safety in their jurisdictions.

      Moreover, the results showed that the AI object detector trained on images obtained from a local fire department performed better than one trained on images downloaded from Google Images. Therefore, when applying this research, we recommend that fire departments establish their own image databases of local firefighter and fire apparatus images to improve AI model performance and tailor it to regional characteristics.

      Finally, we encourage researchers to focus on the practical implications proposed in this article. As broader and more advanced AI applications are developed, firefighting communities can use this technology to increase situational awareness, personnel accountability, and incident command on the ground.

    • Figure 2 summarizes the workflow of this study. We explored two methods of collecting images for training the model: (1) downloading images from online sources; and (2) obtaining images from the scenes of fires. Deng et al. showed that, with a cleanly annotated set of full-resolution (minimum 400 × 350 pixels) images, object recognition can be more accurate, especially when exploiting more feature-level information[25]. Therefore, we downloaded full-resolution images from Google Images by searching for firefighters and fire trucks, using only copyright-free images. In total, 612 images were obtained from online sources for the first visual dataset. In addition, we obtained on-site images from fire events in Taiwan; a total of 152 images formed the second visual dataset. After collecting the images, we annotated each image with three classes: firefighters, non-firefighters, and fire trucks. The annotated images were then used to train the YOLOv4 model for detection, with each visual dataset trained separately (a dataset preparation sketch is given after Fig. 2). YOLOv4 was compiled in Microsoft Visual Studio 2019 to run on the Windows operating system with GPU (GeForce GTX 1660 Ti with 16 GB VRAM), CUDNN_HALF, and OpenCV. For each dataset, the images were partitioned into training and testing sets with an 80%/20% split. Finally, we compared the model performances obtained with the two visual datasets and discussed the implications of this study.

      Figure 2. 

      Flowchart of the study.
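      As an illustration of the annotation and splitting steps (not the authors' actual scripts), the sketch below writes Darknet-style YOLO label files, in which each line holds a class index plus a box centre, width, and height normalized by the image size, and then produces an 80%/20% train/test list. The "images/" folder name and the ".jpg" extension are assumptions.

```python
# Dataset-preparation sketch: YOLO/Darknet label files plus an 80/20 split.
import random
from pathlib import Path

CLASSES = ["firefighter", "non-firefighter", "firetruck"]

def write_label(txt_path: Path, boxes, img_w: int, img_h: int) -> None:
    """boxes: iterable of (class_id, x_min, y_min, x_max, y_max) in pixel coordinates."""
    lines = []
    for cls, x1, y1, x2, y2 in boxes:
        xc = (x1 + x2) / 2 / img_w   # box centre x, normalized to [0, 1]
        yc = (y1 + y2) / 2 / img_h   # box centre y, normalized to [0, 1]
        w = (x2 - x1) / img_w        # box width, normalized
        h = (y2 - y1) / img_h        # box height, normalized
        lines.append(f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}")
    txt_path.write_text("\n".join(lines))

def split_dataset(image_dir: str = "images", train_frac: float = 0.8, seed: int = 0) -> None:
    """Write the train.txt / test.txt image lists that Darknet-style training reads."""
    images = sorted(str(p) for p in Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    cut = int(len(images) * train_frac)
    Path("train.txt").write_text("\n".join(images[:cut]))
    Path("test.txt").write_text("\n".join(images[cut:]))

if __name__ == "__main__":
    split_dataset()
```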

    • YOLOv4 consists of three main blocks: the 'backbone', the 'neck', and the 'head'[19]. The model uses the Cross Stage Partial Network (CSPNet) method in its backbone to extract features[26]; the backbone has 53 convolutional layers for accurate image classification and is known as CSPDarknet53. CSPDarknet53 can largely reduce the complexity of the target problem while maintaining accuracy. The 'neck' is a layer between the 'backbone' and the 'head' that acts as a feature aggregator. YOLOv4 uses the Path Aggregation Network (PANet)[27] and Spatial Pyramid Pooling (SPP) to set apart the important features obtained from the 'backbone'[28]. PANet utilizes bottom-up path augmentation to aggregate features for image segmentation, and SPP enables YOLOv4 to accept input images of any size. The 'head' performs dense prediction for anchor-based detection: it divides the image into multiple cells and inspects each cell for the probability that it contains an object, followed by post-processing techniques[29]. A short sketch for inspecting the detection heads of a YOLOv4 network follows.
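      For readers who want to see these heads directly, the short sketch below (assuming OpenCV >= 4.4 and the publicly released yolov4.cfg and yolov4.weights files, with paths as placeholders) loads the network and prints its unconnected output layers, one per prediction scale.

```python
# Inspect the YOLOv4 detection heads via OpenCV's DNN module.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")  # public release files (assumed paths)
print(net.getUnconnectedOutLayersNames())
# Expect three YOLO output layers, one per prediction scale, e.g. ('yolo_139', 'yolo_150', 'yolo_161').
```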

      The model performances were analyzed using four metrics: accuracy, precision, recall, and F1-score[30]. A confusion matrix for binary classification includes four possible outcomes: true positive (TP), true negative (TN), false positive (FP), and false negative (FN). TP is the number of objects successfully detected by the algorithm; TN is the number of non-objects correctly identified as not being objects; FP is the number of non-objects falsely identified as objects; and FN is the number of objects falsely identified as non-objects. Accuracy represents the overall performance of the model, i.e., the proportion of TP and TN over all data points. Precision represents the model's ability to identify relevant data points, i.e., the proportion of data points classified as positive that are actually positive. Recall describes the model's ability to find all relevant data points, i.e., the proportion of the relevant data points that are correctly identified. The F1-score balances precision and recall: maximizing precision often comes at the expense of recall, and vice versa, so the F1-score is useful for ensuring that both remain acceptable. The metrics are calculated as follows (a small computational sketch follows the formulas):

      $ Accuracy=\frac{TP+TN}{TP+TN+FP+FN} $
      $ Precision=\frac{TP}{TP+FP} $
      $ Recall=\frac{TP}{TP+FN} $
      $ {F}_{1}=\frac{2\times Precision\times Recall}{Precision+Recall} $
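      As a small worked example of the four formulas, the function below computes them from raw confusion-matrix counts; the counts used in the call are made up for illustration and are not results from this study.

```python
# Worked example of accuracy, precision, recall, and F1 from confusion-matrix counts.
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "F1": f1}

# Illustrative counts only (not data from this study):
print(metrics(tp=90, tn=40, fp=10, fn=20))
# {'accuracy': 0.8125, 'precision': 0.9, 'recall': 0.818..., 'F1': 0.857...}
```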
      • The authors appreciate the financial support provided by the Hudson College of Public Health at The University of Oklahoma Health Sciences Center, and the Presbyterian Health Foundation (PHF, grant number 20000243) for exploring the AI applications.

      • The authors declare that they have no conflict of interest.

      • Copyright: © 2022 by the author(s). Published by Maximum Academic Press on behalf of Nanjing Tech University. This article is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0); visit https://creativecommons.org/licenses/by/4.0/.
  • About this article
    Cite this article
    Chang RH, Peng YT, Choi S, Cai C. 2022. Applying Artificial Intelligence (AI) to improve fire response activities. Emergency Management Science and Technology 2:7 doi: 10.48130/EMST-2022-0007
