Short-term inbound rail transit passenger flow prediction based on BILSTM model and influence factor analysis

  • [1]

    Jiang W, Zhang H, Long Y, Chen J, Sui Y, et al. 2021. GPS data in urban online ride-hailing: The technical potential analysis of demand prediction model. Journal of Cleaner Production 279:123706

    doi: 10.1016/j.jclepro.2020.123706

    CrossRef   Google Scholar

    [2]

    Ke J, Feng S, Zhu Z, Yang H, Ye J. 2021. Joint predictions of multi-modal ride-hailing demands: A deep multi-task multi-graph learning-based approach. Transportation Research Part C: Emerging Technologies 127:103063

    doi: 10.1016/j.trc.2021.103063

    CrossRef   Google Scholar

    [3]

    Rahman MH, Rifaat SM. 2021. Using spatio-temporal deep learning for forecasting demand and supply-demand gap in ride-hailing system with anonymised spatial adjacency information. IET Intelligent Transport Systems 15:941−57

    doi: 10.1049/itr2.12073

    CrossRef   Google Scholar

    [4]

    Zhang D, Xiao F, Shen M, Zhong S. 2021. DNEAT: A novel dynamic node-edge attention network for origin-destination demand prediction. Transportation Research Part C: Emerging Technologies 122:102851

    doi: 10.1016/j.trc.2020.102851

    CrossRef   Google Scholar

    [5]

    Elman JL. 1991. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning 7:195−225

    doi: 10.1007/BF00114844

    CrossRef   Google Scholar

    [6]

    Rumelhart DE, Hinton GE, Williams RJ. 1986. Learning representations by back-propagating errors. Nature 323:533−36

    doi: 10.1038/323533a0

    CrossRef   Google Scholar

    [7]

    Schmidhuber J. 2015. Deep learning in neural networks: An overview. Neural Networks 61:85−117

    doi: 10.1016/j.neunet.2014.09.003

    CrossRef   Google Scholar

    [8]

    Yang D, Chen K, Yang M, Zhao X. 2019. Urban rail transit passenger flow forecast based on LSTM with enhanced long-term features. IET Intelligent Transport Systems 10:1475−82

    doi: 10.1049/iet-its.2018.5511

    CrossRef   Google Scholar

    [9]

    Zhang J, Chen F, Shen Q. 2019. Cluster-Based LSTM Network for Short-Term Passenger Flow Forecasting in Urban Rail Transit. IEEE Access 7:147653−71

    doi: 10.1109/ACCESS.2019.2941987

    CrossRef   Google Scholar

    [10]

    Yang X, Xue Q, Ding M, Wu J, Gao Z. 2021. Short-term prediction of passenger volume for urban rail systems: A deep learning approach based on smart-card data. International Journal of Production Economics 231:107920

    doi: 10.1016/j.ijpe.2020.107920

    CrossRef   Google Scholar

    [11]

    Ibrahim A, Hall F. 1994. Effect of adverse weather conditions on speed-flow-occupancy relationships. Transportation Research Record 1994:184−91

    Google Scholar

    [12]

    Brilon W, Ponzlet M. 1996. Variability of speed-flow relationships on German autobahns. Transportation Research Record 1555:91−98

    doi: 10.1177/0361198196155500112

    CrossRef   Google Scholar

    [13]

    Agarwal M, Maze T, Souleyrette R. 2005. Impacts of weather on urban freeway traffic flow characteristics and facility capacity. Proceedings of the 2005 Mid-Continent Transportation Research Symposium, Ames, Iowa, August 2005. pp. 1121−34.

    [14]

    Zhang D, Kabuka MR. 2018. Combining weather condition data to predict traffic flow: a GRU-based deep learning approach. IET Intelligent Transport Systems 12:578−85

    doi: 10.1049/iet-its.2017.0313

    CrossRef   Google Scholar

    [15]

    Li G, Yang Y, Qu X. 2020. Deep learning approaches on pedestrian detection in hazy weather. IEEE Transactions on Industrial Electronics 67:8889−99

    doi: 10.1109/TIE.2019.2945295

    CrossRef   Google Scholar

    [16]

    Liu L, Chen RC. 2017. A novel passenger flow prediction model using deep learning methods. Transportation Research Part C: Emerging Technologies 84:74−91

    doi: 10.1016/j.trc.2017.08.001

    CrossRef   Google Scholar

    [17]

    Hou Y, Deng Z, Cui H. 2021. Short-term traffic flow prediction with weather conditions: Based on deep learning algorithms and data fusion. Complexity 2021:6662959

    doi: 10.1155/2021/6662959

    CrossRef   Google Scholar

    [18]

    Liu L, Chen R, Zhu S. 2020. Impacts of weather on short-term metro passenger flow forecasting using a deep LSTM neural network. Applied Sciences 10:2962

    doi: 10.3390/app10082962

    CrossRef   Google Scholar

    [19]

    Zhang S, Zhang J, Yang L, Yin J, Gao Z. 2022. Spatial-temporal attention fusion network for short-term passenger flow prediction on holidays in urban rail transit systems. Machine Learning arXiv:2203.00007

    doi: abs/2203.00007

    CrossRef   Google Scholar

    [20]

    Yang J, Liu T, Li C, Tong W, Zhu Y. et al. 2021. MGSTCN: A Multi-Graph Spatio-Temporal Convolutional Network for Metro Passenger Flow Prediction. 2021 7th International Conference on Big Data Computing and Communications (BigCom), Deqing, China, 2021. pp. 164−71. USA: IEEE. https://doi.org/10.1109/BigCom53800.2021.00050.

    [21]

    Zhu H, Yang X, Wang Y. 2018. Prediction of Daily Entrance and Exit Passenger Flow of Rail Transit Stations by Deep Learning Method. Journal of Advanced Transportation 2018:6142724

    doi: 10.1155/2018/6142724

    CrossRef   Google Scholar

    [22]

    Ling X, Huang Z, Wang C, Zhang F, Wang P. 2018. Predicting subway passenger flows under different traffic conditions. Plos One 13:e0202707

    doi: 10.1371/journal.pone.0202707

    CrossRef   Google Scholar

    [23]

    Zhu K, Xun P, Li W, Li Z, Zhou R. 2019. Prediction of passenger flow in urban rail transit based on big data analysis and deep learning. IEEE Access 7:142272−79

    doi: 10.1109/ACCESS.2019.2944744

    CrossRef   Google Scholar

    [24]

    Guo J, Xie Z, Qin Y, Jia L, Wang Y. 2019. Short-term abnormal passenger flow prediction based on the fusion of SVR and LSTM. IEEE Access 7:42946−55

    doi: 10.1109/ACCESS.2019.2907739

    CrossRef   Google Scholar

    [25]

    Guo Z, Zhao X, Chen Y, Wu W, Yang J. 2019. Short-term passenger flow forecast of urban rail transit based on GPR and KRR. IET Intelligent Transport Systems 13:1374−82

    doi: 10.1049/iet-its.2018.5530

    CrossRef   Google Scholar

    [26]

    Li D, Cao J, Li R, Wu L. 2020. A spatio-temporal structured LSTM model for short-term prediction of origin-destination matrix in rail transit with multisource data. IEEE Access 8:84000−19

    doi: 10.1109/ACCESS.2020.2991982

    CrossRef   Google Scholar

    [27]

    Xue F, Yao E, Huan N, Li B, Liu S. 2020. Prediction of Urban Rail Transit Ridership under Rainfall Weather Conditions. Journal of Transportation Engineering, Part A: Systems 146:4020061

    doi: 10.1061/jtepbs.0000383

    CrossRef   Google Scholar

    [28]

    Liu Q, Guo Q, Wang W, Zhang Y, Kang Q. 2021. An automatic detection algorithm of metro passenger boarding and alighting based on deep learning and optical flow. IEEE Transactions on Instrumentation and Measurement 70:5006613

    doi: 10.1109/TIM.2021.3054627

    CrossRef   Google Scholar

    [29]

    Jing Y, Hu H, Guo S, Wang X, Chen F. 2021. Short-term prediction of urban rail transit passenger flow in external passenger transport hub based on LSTM-LGB-DRS. IEEE Transactions on Intelligent Transportation Systems 22:4611−21

    doi: 10.1109/TITS.2020.3017109

    CrossRef   Google Scholar

    [30]

    Liu D, Wu Z, Sun S. 2022. Study on subway passenger flow prediction based on deep recurrent neural network. Multimedia Tools and Applications 81:18979−92

    doi: 10.1007/s11042-020-09088-x

    CrossRef   Google Scholar

    [31]

    He Y, Li L, Zhu X, Tsui KL. 2022. Multi-graph convolutional-recurrent neural network (MGC-RNN) for short-term forecasting of transit passenger flow. IEEE Transactions on Intelligent Transportation Systems 23:8155−74

    doi: 10.1109/TITS.2022.3150600

    CrossRef   Google Scholar

    [32]

    Mudashiru RB, Sabtu N, Abdullah R, Saleh A, Abustan I. 2022. A comparison of three multi-criteria decision-making models in mapping flood hazard areas of Northeast Penang, Malaysia. Natural Hazards 112:1903−39

    doi: 10.1007/s11069-022-05250-w

    CrossRef   Google Scholar

    [33]

    Wang F, Huang GH, Fan Y, Li YP. 2020. Robust Subsampling ANOVA Methods for Sensitivity Analysis of Water Resource and Environmental Models. Water Resour Manag 34:3199−17

    doi: 10.1007/s11269-020-02608-2

    CrossRef   Google Scholar

    [34]

    Yang G, Xu H. 2020. A residual BiLSTM model for named entity recognition. IEEE Access 8:227710−18

    doi: 10.1109/ACCESS.2020.3046253

    CrossRef   Google Scholar

    [35]

    Moayedi H, Osouli A, Nguyen H, Rashid ASA. 2021. A novel Harris hawks' optimization and k-fold cross-validation predicting slope stability. Engineering With Computers 37:369−79

    doi: 10.1007/s00366-019-00828-8

    CrossRef   Google Scholar

    [36]

    Vabalas A, Gowen E, Poliakoff E, Casson A. 2019. Machine learning algorithm validation with a limited sample size. PLoS One 14:e0224365

    doi: 10.1371/journal.pone.0224365

    CrossRef   Google Scholar

    [37]

    Xiong Z, Cui Y, Liu Z, Zhao Y, Hu M, et al. 2020. Evaluating explorative prediction power of machine learning algorithms for materials discovery using k-fold forward cross-validation. Computational Materials Science 171:109203

    doi: 10.1016/j.commatsci.2019.109203

    CrossRef   Google Scholar

    [38]

    Wu W, Liu R, Jin W, Ma C. 2019. Stochastic bus schedule coordination considering demand assignment and rerouting of passengers. Transportation Research Part B: Methodological 121:275−303

    doi: 10.1016/j.trb.2019.01.010

    CrossRef   Google Scholar

    [39]

    Cheng R, Ge H, Wang J. 2017. An extended continuum model accounting for the driver’s timid and aggressive attributions. Physics Letters A 381:1302−12

    doi: 10.1016/j.physleta.2017.02.018

    CrossRef   Google Scholar

    [40]

    Sun Y, Ge H, Cheng R. 2019. An extended car-following model considering driver’s memory and average speed of preceding vehicles with control strategy. Physica A: Statistical Mechanics and Its Applications 521:752−61

    doi: 10.1016/j.physa.2019.01.092

    CrossRef   Google Scholar

    [41]

    Jiang C, Ge H, Cheng R. 2019. Mean-field flow difference model with consideration of on-ramp and off-ramp. Physica A: Statistical Mechanics and Its Applications 513:465−67

    doi: 10.1016/j.physa.2018.09.026

    CrossRef   Google Scholar

    [42]

    Ma C, Dai G, Zhou J. 2022. Short-term traffic flow prediction for urban road sections based on time series analysis and LSTM_BILSTM method. IEEE Transactions on Intelligent Transportation Systems 23:5615−24

    doi: 10.1109/TITS.2021.3055258

    CrossRef   Google Scholar

    [43]

    Li L, Yang Y, Yuan Z, Chen Z. 2021. Aspatial-temporal approach for traffic status analysis and prediction based on Bi-LSTM structure. Modern Physics Letters 35:2150481

    doi: 10.1142/s0217984921504819

    CrossRef   Google Scholar

    [44]

    Yang Y, Yuan Z, Meng R. 2022. Exploring traffic crash occurrence mechanism toward cross-area freeways via an improved data mining approach. Journal of Transportation Engineering, Part A: Systems 148:04022052

    doi: 10.1061/jtepbs.0000698

    CrossRef   Google Scholar



Digital Transportation and Safety 2023, 2(1): 12−22

Abstract: Accurate and real-time passenger flow prediction for rail transit is an important part of intelligent transportation systems (ITS). Previous studies show that a single model performs poorly on datasets whose passenger flow characteristics change markedly, while deep learning models that incorporate influencing factors achieve better prediction accuracy. To provide persuasive passenger flow forecasts for ITS, this paper proposes a deep learning model that considers influencing factors. Because earlier work selected influencing factors without objective analysis, this paper uses the analytic hierarchy process (AHP) and one-way ANOVA to select the time-characteristic factor scientifically, and classifies and weights the hourly passenger flow through the Duncan test. Combining these time weights, a BILSTM-based model considering hourly travel characteristics is proposed. Model performance is verified on the inbound passenger flow of Ningbo rail transit, and the proposed model is compared with several mainstream deep learning algorithms to validate its effectiveness. Comparison across multiple evaluation indicators and other deep learning models shows that the R2 score of the BILSTM model considering influencing factors reaches 0.968, and its MAE is 45.61% lower than that of the BILSTM model without influencing factors.

• With rapid economic development, metro penetration keeps rising, and as a city's population grows, so does its metro passenger flow[1−4]. With the development of intelligent transportation systems (ITS), passenger flow forecasting has become an important link in rail transit. Rail transit passenger flow prediction can provide scientific data for rail transit operators and related departments, help allocate resources and manpower effectively, and improve the safety, comfort and economic benefits of the entire transportation system. It also provides an effective data basis for relevant departments to handle emergencies. Through wide publicity and reporting of passenger flow forecasts, passengers can choose a public transport mode scientifically and reasonably and avoid peak times where possible, which improves travel efficiency and relieves pressure on the rail transit system.

Deep learning is a relatively new research direction in machine learning and is now widely applied in transportation. Before deep learning models became popular, many scholars relied on statistical analysis models such as the Autoregressive Integrated Moving Average model (ARIMA). Early studies of recurrent networks[5,6] revealed fundamental reasons why training a Recurrent Neural Network (RNN) is difficult. The Long Short-Term Memory (LSTM) network[7], a special RNN, was later proposed; it solves the long-term dependence problem of the RNN and can learn long-term dependence information. Many deep learning models followed, such as the Gated Recurrent Unit (GRU) and the Bi-directional LSTM (BILSTM) network. With the development of deep learning, research on rail transit passenger flow prediction has grown in recent years. Some scholars have optimized models to improve prediction accuracy. Yang et al.[8] proposed an improved LSTM model with enhanced long-term features (ELF-LSTM), which makes full use of the LSTM neural network (LSTMNN) for processing time series and overcomes the limitation of long-time dependent learning caused by time delay. Zhang et al.[9] proposed a two-step K-means clustering model that captures both the trend and the characteristics of passenger flow, and then proposed the CB-LSTM model for short-term passenger flow prediction. Yang et al.[10] established an improved spatiotemporal LSTM model (SP-LSTM) for short-term outbound passenger flow prediction at urban rail transit stations; it predicts outbound volume from historical spatiotemporal passenger volume, the origin-destination (OD) matrix and rail transit network operation data. Meanwhile, some scholars have proposed deep learning models considering influencing factors to predict rail transit passenger flow and road traffic flow[11−16]. Hou et al.[17] considered weather factors and proposed a traffic flow prediction framework combining Stacked Auto-Encoder (SAE) and Radial Basis Function (RBF) neural networks, which effectively captures the temporal correlation and periodicity of traffic flow data and the interference of weather factors. Liu et al.[18] combined weather factors (including wind speed) with an LSTM model to predict short-term subway passenger flow; the results show that weather variables have a significant impact on passenger flow. Other scholars[19,20] have used Graph Convolutional Network (GCN) models to predict station passenger flow by considering the spatial topology of the rail network, and the results show that GCN models perform well on such spatial structures.

The main literature on rail transit passenger flow prediction is summarized in Table 1. Few scholars have considered influencing factors in passenger flow prediction studies, and the existing literature that does consider them has not analyzed the factors themselves. This paper uses the analytic hierarchy process, one-way analysis of variance (ANOVA) and the Duncan method to analyze the influencing factors. After studying and comparing deep learning models, a BILSTM model incorporating the influencing factors is used to predict rail transit passenger flow.

      Table 1.  Literature on passenger flow forecast of rail transit.

Author              Year  Model         Considering factors  Analyzing influencing factors
Zhu et al.[21]      2018  Adam          No                   No
Ling et al.[22]     2018  DBSCAN        No                   No
Zhang et al.[9]     2019  CB-LSTM       No                   No
Zhu et al.[23]      2019  DBN-SVM       No                   No
Guo et al.[24]      2019  SVR-LSTM      No                   No
Guo et al.[25]      2019  KRR and GPR   No                   No
Li et al.[26]       2020  STLSTM        No                   No
Zhang & Kabuka[14]  2020  LSTM          Yes                  No
Xue et al.[27]      2020  SVR           Yes                  No
Liu et al.[28]      2021  MPD           No                   No
Jing et al.[29]     2021  LGB-LSTM-DRS  No                   No
Liu et al.[30]      2022  DRNN          No                   No
He et al.[31]       2022  MGC-RNN       No                   No
This paper          2022  BILSTM        Yes                  Yes

      The main contributions of this paper are as follows:

(i) The influencing factors of inbound rail transit passenger flow are analyzed, the factor with the highest weight is selected scientifically, and the related weights are set using scientific methods.

(ii) By comparing the predictions of each deep learning model, the model with the best predictive performance is selected.

(iii) A BILSTM model considering the influencing factors is proposed to predict rail transit passenger flow, and the model parameters are adjusted scientifically through K-fold cross-validation. The actual prediction results verify that the proposed model has good prediction performance and that accuracy is improved by adding influencing factors.

• The data source of this paper is the IC card swiping data of Ningbo rail transit Lines 1, 2 and 3, covering the period from September 16, 2019 to September 20, 2019, with more than one million swipes. As shown in Fig. 1, the red line is Ningbo rail transit Line 2, the blue line is Line 1, and the green line is Line 3. Lines 2 and 3 run roughly north-south, while Line 1 runs east-west. These three lines carry most of the rail transit passenger flow in Ningbo.

      Figure 1. 

      Line map of Ningbo Rail Transit Line 1, Line 2 and Line 3.

The original IC card swiping data obtained from Ningbo rail transit are shown in Table 2.

Table 2.  Data sample of rail transit IC cards from Ningbo rail transit.

Card no           Time                 Swipe type  Busline  Fee   Stop no
1057471b906d1eb7  2019-09-16 08:16:32  0           1        1.9   115
c5360d16cf44a600  2019-09-16 08:12:11  1           1        0     125
04b0dbf510ce4d68  2019-09-16 08:14:33  1           1        0     119
8e043c1fcd22c414  2019-09-16 08:17:25  1           1        0     119
987b102a48b2cb3a  2019-09-16 08:16:08  0           1        2.85  115

The first column, 'Card no', is the rail transit IC card number; the second column, 'Time', is the card swipe time; the third column, 'Swipe type', is the type of card swipe; the fourth column, 'Busline', is the rail transit line; the fifth column, 'Fee', is the fare; and the last column, 'Stop no', is the rail transit stop number.

The raw data in Table 2 were filtered and analyzed using SQL Server; the specific flow is shown in Fig. 2.

      Figure 2. 

      Data analysis flow chart.

Firstly, we cleaned the data in the SQL database: we re-examined and verified the data, deleted duplicate records, removed invalid data (such as records with confused or missing timestamps), corrected existing errors, and ensured data consistency.

Secondly, we classified the cleaned IC card data, extracted the rail transit records, and split the remaining records by day using the same method to obtain daily datasets. After processing, we analyzed the valid data and, through SQL queries, aggregated the rail transit IC card records into 5-minute intervals; a small sketch of this aggregation step is given below.
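The same aggregation can be reproduced outside SQL. The following minimal pandas sketch is not the authors' SQL pipeline: the file name, the column names (taken from Table 2), and the assumption that swipe type 0 marks a station entry are all placeholders to be adjusted to the actual coding.

```python
import pandas as pd

# Hypothetical export of the cleaned IC card records (columns as in Table 2).
records = pd.read_csv("ningbo_ic_card.csv", parse_dates=["Time"])

ENTRY_CODE = 0  # assumed code for an inbound (entry) swipe; adjust if different
inbound = records[records["Swipe type"] == ENTRY_CODE]

# Count inbound swipes in each 5-minute interval to obtain the flow series.
flow_5min = (
    inbound.set_index("Time")
    .resample("5min")
    .size()
    .rename("inbound_passengers")
)
print(flow_5min.head())
```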

Figure 3 shows the total passenger flow over the five working days. The daily totals are basically the same, and so are the daily travel characteristics. Figure 4 shows the 5-minute inbound passenger flow over the five days. The passenger flow characteristics from September 16 to September 19 (Monday to Thursday) are basically similar, but the pattern on September 20 (Friday) differs from the previous four days: the Friday evening peak is larger than the morning peak, and the overall Friday volume is lower than on the previous four days. Therefore, specific means are needed to forecast Friday's passenger flow.

      Figure 3. 

      Five-day total passenger flow histogram of rail transit through Ningbo.

      Figure 4. 

      Characteristic diagram of passenger flow.

The data screened using the SQL database are then passed to the next stage of analysis.

• Because the influencing factors selected in this paper cannot be quantified directly, and there is no numerical support for weighting each factor, the analytic hierarchy process is used to analyze the influencing factors. The Analytic Hierarchy Process (AHP)[32] is a systematic, hierarchical analysis method combining qualitative and quantitative analysis. Its feature is to use relatively little quantitative information to mathematize the decision-making thought process, on the basis of in-depth study of the nature of a complex decision problem, its influencing factors and their internal relationships. It thus provides a simple decision-making method for complex problems with multiple objectives, multiple criteria or no clear structure, and is suited to decisions on complex systems that are difficult to quantify completely.

When determining the weights among the factors at each level, purely qualitative results are often not easily accepted. Therefore, a consistent-matrix method has been proposed, as follows:

(1) Factors are compared in pairs rather than all at once;

(2) Relative scales are used to reduce the difficulty of comparing factors of different natures, so as to improve accuracy.

A pairwise comparison matrix compares the relative importance of all factors in a layer with respect to one factor (a criterion or objective) in the layer above. The element $a_{ij}$ of the pairwise comparison matrix represents the result of comparing the i-th factor with the j-th factor. In this paper these values are assigned using the 1−5 scale method shown in Table 3.

Table 3.  Impact classification table[32].

Scale    Meaning
1        The two factors are of equal importance
3        One factor is obviously more important than the other
5        One factor is strongly more important than the other
2 and 4  Intermediate values between the two adjacent judgments above

      First, we draw all the scores as a matrix.

$ A = \left[ {\begin{array}{*{20}{c}} {{a_{11}}}& \cdots &{{a_{1j}}} \\ \vdots & \ddots & \vdots \\ {{a_{i1}}}& \cdots &{{a_{ij}}} \end{array}} \right] $ (1)

The unique non-zero eigenvalue of an n-order consistent matrix is n. For an n-order positive reciprocal matrix $A$ (${a_{ij}} > 0$, ${a_{ij}} = \frac{1}{{{a_{ji}}}}$, ${a_{ii}} = 1$) with maximum eigenvalue $\lambda$, we have $\lambda \geqslant n$, and A is consistent if and only if $\lambda = n$.

Because $\lambda$ depends continuously on ${a_{ij}}$, the more $\lambda$ exceeds $n$, the more inconsistent A is. When the eigenvector corresponding to the maximum eigenvalue is used as the weight vector of the compared factors with respect to the upper-level factor, a greater degree of inconsistency leads to a greater judgment error. Thus, the inconsistency of A can be measured by the value of $\lambda - n$.

      Then, construct the consistency index, and the formula is as follows:

      $ CI = \frac{{\lambda - n}}{{n - 1}} $ (2)

To measure the size of CI, the random consistency index RI is introduced. By randomly constructing 500 pairwise comparison matrices, RI is obtained as:

      $ RI = \frac{{C{I_1} + C{I_2} + \cdot \cdot \cdot + C{I_{500}}}}{{500}} = \frac{{\dfrac{{{\lambda _1} + {\lambda _2} + \cdot \cdot \cdot + {\lambda _{500}}}}{{500}} - n}}{{n - 1}} $ (3)

      Then, the consistency ratio $CR$ is defined as follows:

      $ CR = \frac{{CI}}{{RI}} $ (4)

Generally, when $CR$ is less than 0.1, the inconsistency of A is considered to be within the allowable range and the matrix has satisfactory consistency. Once the consistency test is passed, the normalized eigenvector can be used as the weight vector W. Otherwise, matrix A must be reconstructed by adjusting ${a_{ij}}$.

      The formula for calculating the weight vector $W$ is as follows:

      $ AW = \lambda W $ (5)
      $ W = {\left[ {\begin{array}{*{20}{c}} {{w_1}}&{{w_2}}&{ \cdot \cdot \cdot }&{{w_n}} \end{array}} \right]^T} $ (6)

      Among them, ${w_1},{w_2}, \cdot \cdot \cdot ,{w_n}$ represent the weight of each factor.
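To make the procedure concrete, the sketch below applies Eqs. (1)−(6) to Expert 1's comparison matrix from Table 4 using NumPy. RI = 1.41 is the commonly tabulated random consistency index for n = 8 (consistent with Table 7, where CI/CR ≈ 1.41); this is an illustrative check, not the authors' Matlab code.

```python
import numpy as np

# Expert 1's pairwise comparison matrix (Table 4).
A = np.array([
    [1,   3,   5,   4,   4,   4,   5,   4  ],
    [1/3, 1,   4,   1/3, 1/3, 1/3, 2,   3  ],
    [1/5, 1/4, 1,   1/2, 1/3, 1/3, 1,   1  ],
    [1/4, 3,   2,   1,   1,   2,   3,   3  ],
    [1/4, 3,   3,   1,   1,   1,   4,   3  ],
    [1/4, 3,   3,   1/2, 1,   1,   3,   2  ],
    [1/5, 1/2, 1,   1/3, 1/4, 1/3, 1,   1/2],
    [1/4, 1/3, 1,   1/3, 1/3, 1/2, 2,   1  ],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # index of the maximum eigenvalue
lam_max = eigvals.real[k]                # lambda_max, Eq. (5)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                          # normalized weight vector, Eq. (6)

n = A.shape[0]
CI = (lam_max - n) / (n - 1)             # consistency index, Eq. (2)
RI = 1.41                                # tabulated random consistency index for n = 8
CR = CI / RI                             # consistency ratio, Eq. (4)

print("lambda_max =", round(lam_max, 6))   # ~8.57, matching Result 1 in Table 7
print("weights    =", np.round(w, 6))      # commuting travel gets the largest weight
print("CI =", round(CI, 6), "CR =", round(CR, 6), "(CR < 0.1 passes the test)")
```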

Based on a survey, this paper summarizes eight factors that affect the inbound passenger flow of rail transit. As shown in Fig. 5, the eight factors fall into three categories: human factors, environmental factors and policy factors. Human factors include commuting travel, entertainment travel and family income. Environmental factors include weather conditions, road congestion and the distance from the origin/destination (OD) to the subway station. Policy factors include publicity policies and preferential ticket policies.

      Figure 5. 

      Influencing factors of rail transit passenger flow.

Based on the above AHP method, three experts were invited to rate the eight factors using the expert survey method. The scoring results are shown in Tables 4−6, where factors A−H correspond, in order, to the eight factors listed in Table 7.

Table 4.  Expert scoring table (Expert 1).

    A    B    C    D    E    F    G    H
A   1    3    5    4    4    4    5    4
B   1/3  1    4    1/3  1/3  1/3  2    3
C   1/5  1/4  1    1/2  1/3  1/3  1    1
D   1/4  3    2    1    1    2    3    3
E   1/4  3    3    1    1    1    4    3
F   1/4  3    3    1/2  1    1    3    2
G   1/5  1/2  1    1/3  1/4  1/3  1    1/2
H   1/4  1/3  1    1/3  1/3  1/2  2    1

Table 5.  Expert scoring table (Expert 2).

    A    B    C    D    E    F    G    H
A   1    3    4    3    2    2    5    5
B   1/3  1    4    1    1/2  1/2  3    3
C   1/4  1/4  1    1/2  1/3  1/3  1    1
D   1/3  1    2    1    1/2  1    3    3
E   1/4  2    3    2    1    1    5    4
F   1/3  2    3    1    1    1    5    3
G   1/5  1/3  1    1/3  1/5  1/5  1    1
H   1/5  1/3  1    1/3  1/4  1/3  1    1

Table 6.  Expert scoring table (Expert 3).

    A    B    C    D    E    F    G    H
A   1    4    4    3    3    2    4    3
B   1/4  1    3    1/2  1/3  1/2  5    4
C   1/4  1/3  1    1/3  1/3  1/3  1    1
D   1/3  2    3    1    1/2  1    4    3
E   1/3  3    3    2    1    1    4    4
F   1/2  2    3    1    1    1    4    4
G   1/4  1/5  1    1/4  1/4  1/4  1    1
H   1/3  1/4  1    1/3  1/4  1/4  1    1

Based on the scores of the three experts, the weight of each factor is calculated with the AHP method through Matlab programming. The results are shown in Table 7.

Table 7.  AHP analysis results.

Factor                              Result 1   Result 2   Result 3   Average
Commuting travel                    0.345438   0.282882   0.279956   0.3027
Entertainment travel                0.090772   0.117853   0.109588   0.1061
Family income                       0.046612   0.050254   0.047959   0.0483
Weather conditions                  0.150039   0.115545   0.136994   0.1342
Road congestion                     0.145680   0.182969   0.182856   0.1705
Distance from OD to subway station  0.125766   0.163270   0.158760   0.1493
Publicity policies                  0.041781   0.041723   0.041390   0.0416
Preferential ticket policies        0.053911   0.045504   0.042497   0.0473
Maximum eigenvalue                  8.572622   8.175512   8.296744   8.3483
Consistency index                   0.081803   0.025073   0.042392   0.0497
Consistency ratio                   0.058016   0.017782   0.030065   0.0353

From the results in Table 7, the $CR$ values of the three matrices are all less than 0.1, and their average $CR$ is 0.0353, so all pass the consistency test. The average weight of commuting travel is 0.3027, the highest of the eight factors, and commuting travel also has the highest weight in each individual expert rating. Therefore, this paper chooses the commuting travel factor for further analysis.

• In this paper, one-way analysis of variance (ANOVA)[33] is adopted to analyze the degree to which the time factor influences rail transit passenger flow. One-way ANOVA, based on the F test, is a statistical inference method that analyzes data variation to infer whether the population means represented by two or more sample means differ. Simply put, it tests whether different levels of a single factor have an effect on the quantity of interest. The Duncan test is then used to show clearly the relationships among the hours.

Following this method, as shown in Fig. 6, each hour of the day is treated as one group, giving 24 groups; each group contains the rail passenger flow of that hour for the five days. The significance of each hour's effect on passenger flow is then analyzed, and each hour is weighted according to its degree of influence. A minimal sketch of this test is given after Fig. 6.

      Figure 6. 

      SPSS analysis flow chart.
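As a minimal illustration of this test (not the SPSS workflow used in the paper), the sketch below runs a one-way ANOVA over 24 hourly groups with SciPy; the passenger flow values here are synthetic placeholders.

```python
import numpy as np
from scipy import stats

# hourly_flow maps each hour (0-23) to that hour's passenger flow on the
# five days (24 groups x 5 observations). Synthetic placeholder data only.
rng = np.random.default_rng(0)
hourly_flow = {h: rng.poisson(5000, size=5).astype(float) for h in range(24)}

groups = [hourly_flow[h] for h in range(24)]
f_stat, p_value = stats.f_oneway(*groups)   # F test across the 24 hourly groups
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")

# A very small p-value (the paper reports Sig = 0.000) indicates that the hour
# of day significantly affects passenger flow. Duncan's multiple range test
# (run in SPSS in the paper) is then used to merge hours into homogeneous groups.
```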

From Table 8, the F value is much greater than 1: the between-groups mean square is far larger than the within-groups mean square, indicating a significant difference in hourly passenger flow between the groups. The Sig value of 0.000 is well below 0.01, confirming that the groups differ significantly and that the time factor has a significant impact on the size of rail transit passenger flow. Therefore, the data set is further analyzed with the Duncan test.

Table 8.  ANOVA analysis results.

Source          Sum of squares  Degrees of freedom  Mean square  F        Sig
Between groups  2430713412      23                  105683191.8  199.685  0.000
Within groups   50808048.009    96                  529250.5
Total           2481521460      119

      According to Duncan analysis, hourly passenger flow data were grouped using the criterion of inter-group significance P < 0.05. The grouping results are shown in Table 9.

Table 9.  Hourly passenger flow grouping.

Group  Number of cases  Average passenger flow  Intra-group significance
00:00  5                0.00                    0.930
01:00  5                0.00
02:00  5                0.00
03:00  5                0.00
04:00  5                0.00
05:00  5                6.20
23:00  5                47.80
22:00  5                1781.40                 1.000
06:00  5                2741.40                 1.000
21:00  5                4496.00                 0.688
11:00  5                4863.20
20:00  5                4889.20
12:00  5                4939.60
10:00  5                5333.60
14:00  5                5387.20
13:00  5                5514.80
19:00  5                5808.00
15:00  5                6490.60                 0.237
09:00  5                7002.60
16:00  5                7577.40
07:00  5                10478.80                1.000
17:00  5                12819.20                0.208
18:00  5                13402.00
08:00  5                15960.60                1.000
According to the Duncan analysis in Table 9, the 24 hours of the day are divided into eight groups such that the intra-group significance is greater than 0.05 and the inter-group significance is less than 0.05. The hourly weights used in this paper follow these grouping results, as shown in Fig. 7.

      Figure 7. 

      Hourly weight setting diagram.

• In 1997, the LSTM model was first proposed by Hochreiter and Schmidhuber[7]. LSTM is a recurrent neural network designed specifically to solve the long-term dependency problem of the general Recurrent Neural Network (RNN). BILSTM[34] (Bi-directional Long Short-Term Memory) combines a forward LSTM and a backward LSTM; a detailed description of LSTM and BILSTM is given in the Appendix. BILSTM has the advantage of capturing long-range dependencies. Based on the suitability of commonly used deep learning models for this nonlinear data, the BILSTM model is selected and a specific weight is assigned to the hourly inbound passenger flow. An improved BILSTM model considering the commuter travel time factor, named the 'BILSTM+' model, is then proposed; its structure is shown in Fig. 8. The model is trained on the 5-minute rail transit passenger flow data from Monday to Thursday and then predicts the 5-minute passenger flow on Friday. Although the passenger flow characteristics from Monday to Thursday differ from those on Friday, the proposed model is expected to handle this difference and achieve better prediction accuracy.

      Figure 8. 

      Improved model structure.

For the improved model, the whole prediction process is shown in Fig. 9 and can be described in the following five steps:

      Figure 9. 

      Model training process.

Step 1: The data sets are built as time series. The BILSTM model is a time series model and requires a stable time series, otherwise the model cannot be established. The data span September 16, 2019 to September 20, 2019, and each day's travel data are aggregated into 5-minute passenger flow counts.

Step 2: The factor weights are set. The weight of each hour is set according to the data analysis above, and the factor and the 5-minute rail transit passenger flow data are input into the BILSTM model simultaneously. The formula is as follows:

      $ {x_t} = \left( {\begin{array}{*{20}{c}} {{D_m}}&{{W_h}} \end{array}} \right) $ (7)

In Eq. (7), ${x_t}$ is the input; ${D_m}$ and ${W_h}$ respectively represent the 5-minute rail transit passenger flow and the time weight of the corresponding hour of the day. A small sketch of this input construction is given below.
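The sketch below shows one way such an input could be assembled; the hourly weight values are placeholders, since the actual weights follow the Duncan grouping in Fig. 7, and the timestamp handling assumes pandas/`datetime` objects.

```python
import numpy as np

# Hypothetical per-hour weights; replace with the values derived from Fig. 7.
hour_weight = {h: 0.1 for h in range(24)}
hour_weight.update({7: 0.8, 8: 1.0, 17: 0.9, 18: 0.9})

def build_inputs(flow_5min, timestamps):
    """Stack [D_m, W_h] pairs into the model input sequence x_t of Eq. (7)."""
    weights = np.array([hour_weight[t.hour] for t in timestamps])
    return np.column_stack([flow_5min, weights])   # shape: (steps, 2)
```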

Step 3: The data sets are divided. The training set is the learning sample set used to train the model. The validation set is used to tune the parameters of the trained model, such as the number of hidden units, and to determine the network structure or the parameters that control model complexity. The test set measures the performance (e.g., recognition rate) of the trained model. In this paper, the original data set is split into a training set and a test set at a ratio of 4:1, and 10% of the training set is used as the validation set.

Step 4: The data are normalized. The original data keep their distribution while the values are scaled to the range 0 to 1. Normalization preserves the original characteristics of the data but reduces the magnitude of the values, which greatly helps model training and prediction. The Min-Max normalization method is used, calculated as follows:

      $ x' = \frac{{x - \min \left( x \right)}}{{\max \left( x \right) - \min \left( x \right)}} $ (8)

In Eq. (8), $x'$ is the normalized value, $x$ is the original value, and $\min(x)$ and $\max(x)$ are the minimum and maximum of the original data.
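A minimal Python sketch of Eq. (8):

```python
import numpy as np

def min_max_normalize(x):
    """Min-Max normalization of Eq. (8): scales values into [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())
```

Equivalent behaviour is provided by `sklearn.preprocessing.MinMaxScaler`, which also stores the minimum and maximum so that predictions can be inverse-transformed back to passenger counts.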

Step 5: The training set is input into the model for training. According to the training results, the model parameters and the input weights are adjusted continually. The K-fold cross-validation method is used[35−37] with K = 10 and no repeated sampling: the training set is randomly divided into 10 parts; each time, one part is used as the validation set and the remaining nine as the training set. The average of the 10 results is taken as the estimate of model accuracy and as the performance index under the current K-fold cross-validation (see the sketch after Fig. 10). The schematic diagram of K-fold cross-validation is shown in Fig. 10. If the model error is too large, all parameters are reset and the training set is re-trained. Through continuous training and debugging, the optimal parameters are finally determined.

      Figure 10. 

      K-fold cross-validation schematic diagram.
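A minimal sketch of this 10-fold procedure, assuming a Keras-style model exposing fit/evaluate methods; build_model, X and y are placeholders, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(build_model, X, y, k=10):
    """Estimate model accuracy as the average validation score over k folds."""
    kf = KFold(n_splits=k, shuffle=True, random_state=42)
    scores = []
    for train_idx, val_idx in kf.split(X):
        model = build_model()                                   # fresh model per fold
        model.fit(X[train_idx], y[train_idx], epochs=50, verbose=0)
        scores.append(model.evaluate(X[val_idx], y[val_idx], verbose=0))
    return float(np.mean(scores))                               # average of the 10 folds
```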

Through the training and adjustment described above, the final parameters of the model are shown in Table 10.

Table 10.  Detailed description of parameters.

Parameter                                        Value
Number of hidden layers                          2
Number of neurons in each hidden layer           15
Training times (epochs)                          50
Activation function of hidden recurrent layers   tanh
Learning rate                                    0.02
Backstep (look-back steps)                       24

We then evaluated the proposed model using the data from the five working days. An illustrative reconstruction of the network defined by Table 10 is sketched below.
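The following Keras sketch matches the parameters in Table 10 (two bidirectional hidden layers of 15 units, tanh activation, learning rate 0.02, look-back of 24 steps). It is a reconstruction under stated assumptions, not the authors' exact implementation: the number of input features per step (flow plus hour weight), the optimizer and the loss are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_bilstm(backstep=24, n_features=2):
    """BILSTM sketch following Table 10; n_features assumes [D_m, W_h] inputs."""
    model = keras.Sequential([
        keras.Input(shape=(backstep, n_features)),
        layers.Bidirectional(layers.LSTM(15, activation="tanh",
                                         return_sequences=True)),
        layers.Bidirectional(layers.LSTM(15, activation="tanh")),
        layers.Dense(1),                       # predicted 5-min passenger flow
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.02),
                  loss="mse")
    return model

model = build_bilstm()
# model.fit(X_train, y_train, epochs=50, validation_split=0.1)
```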

• In this paper, the Root Mean Squared Error (RMSE), the Mean Absolute Error (MAE) and the coefficient of determination (R2 score) are selected as accuracy evaluation indexes to compare the prediction algorithms[38−41]. Their formulas are shown in Table 11.

Table 11.  Predictive evaluation index formulas.

Metric    Formula
RMSE      ${\rm{RMSE}}=\sqrt {\dfrac{1}{n}\displaystyle\sum\limits^n_{i=1}(y_p-y)^2} $
MAE       ${\rm{MAE}}=\dfrac{1}{n}\displaystyle\sum\limits^n_{i=1}\left|y_p-y\right| $
R2 score  ${\rm{R^2}}=1-\dfrac{\displaystyle \sum\nolimits^{n}_{i=1}\left(y_p-y\right)^2}{\displaystyle\sum\nolimits^{n}_{i=1}\left(y-\bar y\right)^2} $

$ n $ is the number of data samples, $ y_{p} $ is the predicted value, $ y $ is the actual value, and $ \bar y $ is the mean of the actual values.

The Root Mean Squared Error (RMSE) is the square root of the mean squared deviation between the predicted and true values over the n observations. In practice n is always finite and the true value can only be replaced by the most reliable (best) available value. RMSE ranges from 0 to infinity; it equals 0 when the predictions agree perfectly with the real values, and it grows as the error grows.

The Mean Absolute Error (MAE) is another commonly used regression loss function: the mean of the absolute differences between the target and predicted values. It represents the average error magnitude regardless of the direction of the error and also ranges from 0 to infinity, equaling 0 only for a perfect model. MAE is less sensitive to outliers and is therefore more stable as a loss function.

R2 measures goodness of fit, i.e., how well the regression fits the observed values; in deep learning it is usually called the R2 score. Intuitively, it uses the mean as an error baseline and asks whether the prediction error is smaller than that baseline. An R2 score of 1 means the predicted values match the true values exactly, which is the ideal model. In practice there is always some error: when the error is small, the ratio in the formula is small and the score approaches 1, which still indicates a good model; as the error grows, the score moves further away from 1. An R2 score of 0 means the model predicts no better than simply using the mean, and a negative R2 score means the model is worse than this baseline and should be rebuilt.
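The three indexes in Table 11 can be computed directly, for example with scikit-learn; y_true and y_pred below are placeholder arrays standing in for the actual and predicted 5-minute passenger flows.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([120.0, 150.0, 90.0, 200.0])   # placeholder actual values
y_pred = np.array([110.0, 160.0, 95.0, 190.0])   # placeholder predictions

rmse = np.sqrt(mean_squared_error(y_true, y_pred))   # Table 11, RMSE
mae = mean_absolute_error(y_true, y_pred)            # Table 11, MAE
r2 = r2_score(y_true, y_pred)                        # Table 11, R2 score
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}  R2={r2:.3f}")
```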

• First, the general BILSTM model and other deep learning models are used, with the same parameters, to predict the inbound passenger flow described above. The results are shown in Fig. 11.

      Figure 11. 

      Prediction results of each model.

Figure 11 shows the prediction of the 5-minute Friday inbound passenger flow based on the Monday-to-Thursday data. The abscissa is the index of the test-set samples, arranged in 5-minute time order, and the ordinate is the passenger flow. The black polyline is the real data and the green polyline is the BILSTM prediction. Compared with the other four deep learning models, the BILSTM model performs best; the GCN model performs slightly worse than the BILSTM model on this simple one-dimensional time series.

Subsequently, we compared the prediction results of the proposed BILSTM model considering the hourly travel characteristics factor with those of the general BILSTM model, as shown in Fig. 12.

      Figure 12. 

      Comparison chart of prediction results of the BILSTM model.

In the figure, the blue polyline is the model proposed in this paper, which considers the hourly travel characteristics factor, and the red polyline is the baseline BILSTM model. Compared with the general BILSTM model, the proposed model is more accurate during the morning and evening rush hours and during non-operating hours.

The evaluation indexes selected in this paper are then used to assess each model more precisely. Each model is run five times, the evaluation index values of each run are recorded, and the averages of the five runs are compared. Every evaluation index shows that the proposed BILSTM model considering the peak-hour factor gives the best results.

Table 12 lists the evaluation index values for each model. The RMSE of the proposed 'BILSTM+' model is 30.32% lower than that of the original BILSTM model, its MAE is 45.61% lower, and its R2 score is 6.14% higher.

Table 12.  Model evaluation index values.

Metric  Run      RNN      GRU      LSTM     BILSTM  GCN      BILSTM+
RMSE    1        168.382  146.560  113.062  90.894  152.308  62.503
        2        170.353  141.397  119.892  91.684  148.965  60.165
        3        165.936  139.872  110.674  87.541  146.589  63.847
        4        175.364  149.681  107.985  91.983  151.862  65.734
        5        169.872  150.219  115.621  89.548  145.962  62.476
        Average  169.981  145.546  113.447  90.330  149.137  62.945
MAE     1        137.713  120.513  94.716   76.353  130.112  41.621
        2        138.761  115.561  99.061   77.872  128.962  39.592
        3        134.823  113.872  91.691   74.184  131.621  41.932
        4        143.652  123.641  89.549   77.297  127.923  43.291
        5        138.982  124.034  96.082   75.832  125.982  41.095
        Average  138.786  119.524  94.220   76.308  128.920  41.506
R2      1        0.772    0.827    0.897    0.913   0.789    0.969
        2        0.769    0.815    0.881    0.910   0.802    0.973
        3        0.780    0.811    0.902    0.915   0.793    0.968
        4        0.786    0.832    0.909    0.908   0.810    0.961
        5        0.774    0.833    0.885    0.912   0.793    0.970
        Average  0.776    0.824    0.895    0.912   0.797    0.968
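The percentages quoted above follow directly from the averages in Table 12:

$ \dfrac{76.308 - 41.506}{76.308} \approx 45.61{\text{%}},\qquad \dfrac{90.330 - 62.945}{90.330} \approx 30.32{\text{%}},\qquad \dfrac{0.968 - 0.912}{0.912} \approx 6.14{\text{%}} $

for MAE, RMSE and R2, respectively.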

We then compared the prediction results of the BILSTM model considering only the peak-hour factor (a weight feature is given only to the morning and evening peaks, 7:00−9:00 and 16:00−18:00) with those of the BILSTM model considering the hourly travel factor.

As shown in Fig. 13, the prediction of the BILSTM model considering only peak hours is better than that of the original BILSTM model, but the 'BILSTM+' model proposed in this paper remains the most accurate.

      Figure 13. 

      Comparison of prediction results of different models considering time factors.

In summary, the prediction accuracy of the proposed BILSTM model considering the peak-hour commuter travel factor is better than that of the model without influencing factors.

• Rail transit passenger flow forecasting can provide a reasonable basis for rail operation, and forecasts can serve as a reference when major events or activities occur; passenger flow forecasting is therefore of great significance to the future development of rail transit. This paper studies the forecasting of weekday inbound rail transit passenger flow. Because the passenger flow characteristics differ across days, time characteristic factors are introduced to predict the inbound passenger flow more accurately. The AHP method, ANOVA and the Duncan test are used to analyze the influencing factors thoroughly, from which the travel time characteristic is selected. A BILSTM model considering the travel time characteristic, the 'BILSTM+' model, is proposed. Its parameters are tuned by K-fold cross-validation, yielding a model with good predictive ability for the data set in this paper. The proposed model provides a useful reference for the prediction of inbound rail transit passenger flow and for the intellectualization of rail transit.

However, some aspects can still be improved. In the future, multi-factor ANOVA and Duncan analysis could be used to build a prediction model considering multiple factors. The structure of the BILSTM model itself could also be changed by inserting a 'weight gate' that weights the output directly instead of feeding weights in as input feature vectors. We will also study better weight calculation methods for designing the model weights, or use better combined models to predict traffic flow[42−44].

• This work is supported by the Program of Humanities and Social Science of the Education Ministry of China (Grant No. 20YJA630008), the Ningbo Natural Science Foundation of China (Grant No. 202003N4142), the Natural Science Foundation of Zhejiang Province, China (Grant No. LY20G010004), and the K.C. Wong Magna Fund in Ningbo University, China.

      • Rongjun Cheng is the Editorial Board member of Journal Digital Transportation and Safety. He is blinded from reviewing or making decisions on the manuscript. The article was subject to the journal's standard procedures, with peer-review handled independently of this Editorial Board member and his research groups.

      • Appendix Long Short-Term and Bi-directional Long Short-Term Memory Neural Network.
      • Copyright: © 2023 by the author(s). Published by Maximum Academic Press, Fayetteville, GA. This article is an open access article distributed under Creative Commons Attribution License (CC BY 4.0), visit https://creativecommons.org/licenses/by/4.0/.
  • About this article
    Cite this article
    Qi Q, Cheng R, Ge H. 2023. Short-term inbound rail transit passenger flow prediction based on BILSTM model and influence factor analysis. Digital Transportation and Safety 2(1):12−22 doi: 10.48130/DTS-2023-0002
