The experimental equipment included eye trackers, human factor equipment, portable computers, and cameras. An eye tracker was used to collect the drivers' eye movement information, human factor equipment was used to obtain the drivers' bioelectricity characteristic information, and a camera was used to record the entire experimental process.
In the virtual driving experiment, a driving simulation system consisting of a Logitech G29 driving kit, a six-degrees-of-freedom platform, UC-win/Road software, and three 55-in displays was used. The virtual driving experimental environment is shown in Fig. 1.
An SUV was used in the vehicle driving experiment, with an inverter providing power for the equipment. The experimental environment is shown in Fig. 2.
Experimental participants
A total of 50 participants took part in the experiment, with a male-to-female ratio of 8:2. Participants were required to have myopia of no more than 600 degrees. The distribution of their age and driving experience is shown in Table 1.
Table 1. Basic information of experimental participants.
Range | No. of participants
Age (year): 25−35 | 26
Age (year): 36−45 | 18
Age (year): 46−55 | 6
Driving experience (year): 1−5 | 11
Driving experience (year): 5−10 | 18
Driving experience (year): 10−22 | 21
The virtual driving experiment was conducted in a laboratory environment with a driving simulator, so the safety of the experiment was guaranteed. Therefore, an expressway was chosen as the experimental scene in the virtual driving experiment. A bidirectional four-lane expressway with a length of 40 km and a width of 15 m was generated in the simulation software. No curves were added; the entire road consists of straight sections. Trees and grass were added to both sides of the road in a fixed proportion. The driver performed driving tasks back and forth in this scene until the end of the experiment. Tunnel scenes were not selected for the virtual driving experiment because, compared with the highway environment, a tunnel environment displayed on the laboratory screen is more likely to cause visual fatigue for drivers, which would interfere with the experimental results.
The vehicle driving experiment poses certain risks because the vehicles must be driven on actual roads, especially on fast-moving highways. Once an accident occurs, serious consequences can easily follow and the safety of the experiment is difficult to guarantee. Therefore, urban road sections passing through tunnels were selected as the experimental scene for the vehicle driving experiment. Qingdao Binhai Highway (Qingdao, China) was selected for the experiment. The Laoshan Campus of Qingdao University of Science and Technology was chosen as the start point, the Qingdao Campus of Shandong University as the end point, and the Laoshan Campus of Ocean University of China as the intermediate point. The total length of the route was 36 km, with a speed limit of 80 km/h, and the environment on both sides of the road was monotonous. The Yangkou Tunnel, through which the route passes, has a total length of 7.76 km and six lanes in both directions. It is divided into two tunnels, with a left-line length of 3.875 km and a right-line length of 3.888 km. Each single tunnel is 14.8 m wide and 8 m high.
Experimental process
The virtual driving experiment process was as follows:
Participants were required to have had sufficient sleep to eliminate interference caused by fatigue. The experiment started at 9:00 am on weekdays and ended at 11:00 am. In addition to the participant, three colleagues from the laboratory took part in the experimental work: before the experiment, one colleague debugged the experimental equipment, and two colleagues helped the participant put on the experimental equipment. The entire experiment was recorded on video. During the experiment, the driver was required to drive at a speed of 120 km/h in a single lane for 20 min. An experimental assistant observed the driver's eye movements and electrocardiogram characteristics. When the driver's gaze was focused ahead and the electrocardiogram signal remained stable, the assistant recorded this time period. In particular, when the driver appeared to enter an alert state, the assistant proactively asked the driver whether he or she had just experienced road hypnosis and recorded the answer. After the single driving process, the assistant helped the participant remove the equipment, and the participant took a 10 min break. The assistant immediately asked the participant whether he or she had experienced any abnormal driving behaviors, such as fatigue or distraction, during the recent driving process and recorded the answers. The driver could also recall whether road hypnosis had occurred by reviewing the experimental video together with the corresponding eye movement and bioelectricity data, and this was recorded by the assistant. After this process was completed, the driving experiment was restarted. The procedure remained the same, and the driving duration was extended to 40 min. After the driving process was completed, an experimental assistant organized the equipment and the experiment was ended.
The vehicle driving experiment process was as follows:
Due to traffic congestion and other issues during the morning peak hours, the vehicle driving experiment started at 10:00 am and ended at 12:00 noon. In addition to the participant, three colleagues from the laboratory took part in the experimental work. The most important requirement in the vehicle driving experiment was driving safety, and every experiment was conducted under the premise of ensuring safety. Before the experiment, one colleague debugged the experimental equipment, and two colleagues helped the participant put on the experimental equipment. The first section of the experiment ran from the Laoshan Campus of Qingdao University of Science and Technology to the Laoshan Campus of Ocean University of China. The road conditions in this section are relatively complex, so the driver maintained natural driving during this section to become familiar with the experimental equipment and the vehicle driving environment. After arriving at the Laoshan Campus of Ocean University of China, the driver took a break and then continued the driving process. The driver was required to avoid overtaking or lane-changing behavior, without affecting safe driving, because frequent overtaking or lane-changing maneuvers may distract the driver's attention. An experimental assistant observed the driver's eye movements and electrocardiogram characteristics. The assistant recorded the time periods during which the driver's gaze was focused ahead and the electrocardiogram signal remained stable. When the driver appeared to enter an alert state, the assistant proactively asked the driver whether he or she had just experienced road hypnosis and recorded the answer. After arriving at the Qingdao Campus of Shandong University, the assistant helped the participant remove the equipment, and the participant rested for 20 min. The assistant immediately asked the participant whether he or she had experienced any abnormal driving behaviors, such as fatigue or distraction, during the recent driving process and recorded the answers. The driver could also recall whether road hypnosis had occurred by reviewing the experimental video together with the corresponding eye movement and bioelectricity data, and this was recorded. After this process was completed, the driving experiment continued from the Qingdao Campus of Shandong University back to the Laoshan Campus of Qingdao University of Science and Technology, following the same experimental procedure. After arriving at the destination, an experimental assistant organized the equipment and ended the experiment.
After organizing and classifying the experimental data, 50 sets of vehicle driving experimental data and 50 sets of virtual driving experimental data were obtained. The experimental data with typical road hypnosis characterization phenomena were manually screened by colleagues with relevant research experience to construct the road hypnosis dataset. Ten minutes of data were selected from each set. Because road hypnosis is a state that repeatedly appears and disappears within a certain period of time, the 10 min of selected data was not an entirely continuous driving period. Similarly, 10 min of data with no abnormal driving states was selected for the normal driving dataset. After the preliminary screening was completed, the expert scoring method was used to score the two types of datasets. The validity of the datasets was evaluated through video playback, and the final scores were confirmed. During the vehicle driving experiment, nine participants experienced fatigue due to physical exhaustion, uncomfortable sitting posture, and other reasons. Finally, 35 sets of valid video data were obtained as the vehicle driving experimental dataset, including 25 sets of normal driving data and 10 sets of road hypnosis data. In the virtual driving experiment, 43 sets of valid video data were selected, including 27 sets of normal driving data and 16 sets of road hypnosis data.
Methodology
The ensemble learning method used in this study is the Stacking method, which improves the overall performance of the model by combining multiple different base learners. First, the LSTM and KNN base learners are trained separately to obtain their predictions on the input data. These predictions are then used as features and input into the SVM model, which is trained to obtain the final ensemble model. By leveraging the strengths of both the LSTM and KNN base learners, the SVM meta-learner can identify the state of road hypnosis more accurately.
LSTM (Long Short-Term Memory) is a special type of Recurrent Neural Network (RNN) suitable for processing and predicting time series data. LSTM addresses the long-term dependency problem in traditional RNNs by introducing three gates: the forget gate, the input gate, and the output gate. In this study, LSTM is used to process and analyze the eye movement characteristics of drivers to capture their dynamic changes during driving. The preprocessed eye movement data is input into the LSTM model for training, thus obtaining the LSTM base learner for road hypnosis recognition.
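For illustration, a minimal sketch of how such an LSTM base learner could be assembled with Keras is given below. The window length, number of principal components, layer width, and training settings are placeholder assumptions, not the configuration reported in this study.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Assumed input shape: sliding windows of 50 time steps over 6 eye-movement
# principal components (both numbers are illustrative).
TIME_STEPS, N_FEATURES = 50, 6

lstm_base = keras.Sequential([
    layers.Input(shape=(TIME_STEPS, N_FEATURES)),
    layers.LSTM(64),                          # forget/input/output gates handled internally
    layers.Dense(1, activation="sigmoid"),    # 1 = road hypnosis, 0 = normal driving
])
lstm_base.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# X_train: (n_windows, TIME_STEPS, N_FEATURES); y_train: (n_windows,) binary labels.
# lstm_base.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.2)
```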
KNN (k-Nearest Neighbors) is an instance-based non-parametric classification method. It predicts the class of a sample by calculating the distance between the samples to be classified and each sample in the training set, and then selecting the class of the k-nearest samples. In this study, the KNN algorithm is used to process and analyze the bioelectric characteristic data of drivers, such as electrocardiogram (ECG) and electromyogram (EMG) signals. By preprocessing the high-order spectral features and combining them with Principal Component Analysis (PCA), the KNN base learner for road hypnosis recognition is constructed in this study.
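A compact scikit-learn sketch of this base learner is shown below. The variable X_bio stands for a matrix of high-order spectral features extracted from the ECG and EMG signals, and the numbers of retained components and neighbors are illustrative assumptions rather than the values used in the study.

```python
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X_bio: (n_samples, n_spectral_features) high-order spectral features from ECG/EMG.
# y: binary labels (1 = road hypnosis, 0 = normal driving). Both are placeholders.
knn_base = make_pipeline(
    StandardScaler(),                     # put spectral features on a common scale
    PCA(n_components=5),                  # fuse correlated features into principal components
    KNeighborsClassifier(n_neighbors=5),
)
# knn_base.fit(X_bio, y)
# p_bio = knn_base.predict_proba(X_bio_new)[:, 1]   # later fed to the meta-learner
```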
SVM (Support Vector Machine) is a supervised learning model primarily used for classification and regression analysis. SVM classifies samples of different classes by finding an optimal decision boundary in a high-dimensional space. In this study, SVM is used as the meta-learner to perform ensemble learning on the predictions of the LSTM and KNN base learners. By using the outputs of the LSTM and KNN base learners as inputs, the SVM can better integrate the information from both, thereby improving the accuracy of road hypnosis recognition.
In our previous study[8], a road hypnosis identification model was established using eye movement data, and Principal Component Analysis (PCA) was selected to preprocess the eye movement feature data. The eye tracker acquisition parameters and their meanings are shown in Table 2. Descriptive statistical analyses of the eye movement data in the normal driving state and in road hypnosis are shown in Tables 3 & 4.
Table 2. Eye tracker acquisition parameters and their meanings.
Pupil information:
Pupil Diameter Left (mm) | Left eye pupil diameter
Pupil Diameter Right (mm) | Right eye pupil diameter
IPD (mm) | Interpupillary distance
Gaze Point X (px) | Original fixation x-coordinate in pixels
Gaze Point Y (px) | Original fixation y-coordinate in pixels
Gaze Point Right X (px) | X-coordinate of the original gaze point of the right eye
Gaze Point Right Y (px) | Y-coordinate of the original gaze point of the right eye
Gaze Point Left X (px) | X-coordinate of the original gaze point of the left eye
Gaze Point Left Y (px) | Y-coordinate of the original gaze point of the left eye
Fixation Point X (px) | The x-coordinate of the gaze point in pixels
Fixation Point Y (px) | The y-coordinate of the gaze point in pixels
Fixation Duration (ms) | Gaze duration
Saccade information:
Saccade Single Velocity (px/ms) | Average saccade velocity per frame
Saccade Velocity Peak (px/ms) | Velocity peaks during saccades
Blink information:
Blink Index | Blink index
Blink Duration (ms) | Duration of blink
Blink Eye | Blink label: 0 for double wink; 1 for left wink; 2 for right wink
Table 3. Descriptive statistical analysis of eye movement data in normal driving state.
Name | N | Minimum value | Maximum value | Average value | Standard deviation | Variance
Saccade Velocity Average (px/ms) | 68292 | 0.500 | 1.306 | 0.712 | 0.168 | 0.028
Pupil Diameter Left (mm) | 68292 | 3.041 | 4.275 | 3.751 | 0.211 | 0.045
Pupil Diameter Right (mm) | 68292 | −1.534 | 4.232 | 3.677 | 0.413 | 0.170
IPD (mm) | 68292 | 66.345 | 67.618 | 67.004 | 0.161 | 0.026
Gaze Point X (px) | 68292 | 330.646 | 842.180 | 632.310 | 62.291 | 3880.145
Gaze Point Y (px) | 68292 | 176.772 | 380.967 | 295.534 | 26.781 | 717.240
Gaze Point Left X (px) | 68292 | 362.979 | 866.104 | 636.314 | 64.265 | 4129.937
Gaze Point Left Y (px) | 68292 | 180.498 | 633.565 | 390.844 | 80.357 | 6457.305
Gaze Point Right X (px) | 68292 | 298.318 | 829.718 | 627.969 | 66.181 | 4379.910
Gaze Point Right Y (px) | 68292 | −157.189 | 512.873 | 204.823 | 106.666 | 11377.556
Gaze Origin Left X (mm) | 68292 | −5.497 | −4.155 | −4.778 | 0.175 | 0.031
Gaze Origin Left Y (mm) | 68292 | −7.894 | −5.095 | −5.804 | 0.361 | 0.130
Gaze Origin Left Z (mm) | 68292 | −35.869 | −33.335 | −34.054 | 0.283 | 0.080
Gaze Origin Right X (mm) | 68292 | 4.728 | 5.333 | 5.104 | 0.119 | 0.014
Gaze Origin Right Y (mm) | 68292 | −4.630 | −2.530 | −3.332 | 0.301 | 0.091
Gaze Origin Right Z (mm) | 68292 | −32.927 | −31.464 | −32.051 | 0.293 | 0.086
Fixation Duration (ms) | 68292 | 66.000 | 90966.000 | 23206.851 | 22645.982 | 512840520.244
Fixation Point X (px) | 68292 | 402.268 | 851.052 | 625.487 | 54.279 | 2946.158
Fixation Point Y (px) | 68292 | 125.196 | 343.138 | 299.244 | 14.902 | 222.064
N is the sample size.
Table 4. Descriptive statistical analysis of eye movement data in road hypnosis.
Name | N | Minimum value | Maximum value | Average value | Standard deviation | Variance
Saccade Velocity Average (px/ms) | 42775 | 0.00001 | 0.500 | 0.174 | 0.146 | 0.021
Pupil Diameter Left (mm) | 42775 | 2.791 | 4.301 | 3.629 | 0.231 | 0.053
Pupil Diameter Right (mm) | 42775 | −1.307 | 4.323 | 3.636 | 0.277 | 0.077
IPD (mm) | 42775 | 66.149 | 67.536 | 67.017 | 0.125 | 0.016
Gaze Point X (px) | 42775 | 320.779 | 855.683 | 603.739 | 56.414 | 3182.571
Gaze Point Y (px) | 42775 | 160.197 | 393.530 | 302.320 | 20.799 | 432.589
Gaze Point Left X (px) | 42775 | 355.348 | 856.234 | 616.640 | 59.496 | 3539.833
Gaze Point Left Y (px) | 42775 | 165.291 | 655.587 | 386.817 | 71.875 | 5165.952
Gaze Point Right X (px) | 42775 | 236.304 | 854.082 | 590.594 | 57.633 | 3321.547
Gaze Point Right Y (px) | 42775 | −186.220 | 513.097 | 218.809 | 84.647 | 7165.129
Gaze Origin Left X (mm) | 42775 | −5.450 | −4.140 | −4.871 | 0.170 | 0.029
Gaze Origin Left Y (mm) | 42775 | −7.917 | −4.972 | −5.682 | 0.347 | 0.120
Gaze Origin Left Z (mm) | 42775 | −35.888 | −33.288 | −34.064 | 0.265 | 0.070
Gaze Origin Right X (mm) | 42775 | 4.548 | 5.374 | 5.127 | 0.100 | 0.010
Gaze Origin Right Y (mm) | 42775 | −4.590 | −2.518 | −3.304 | 0.234 | 0.055
Gaze Origin Right Z (mm) | 42775 | −32.966 | −31.306 | −31.926 | 0.248 | 0.062
Fixation Duration (ms) | 42775 | 167.000 | 90966 | 40057.15 | 30047.406 | 902846577
Fixation Point X (px) | 42775 | 402.268 | 851.052 | 608.402 | 39.981 | 1598.472
Fixation Point Y (px) | 42775 | 59.117 | 408.848 | 302.290 | 12.971 | 168.255
N is the sample size.
Principal component analysis is used to preprocess the eye movement feature data. After processing, six principal components are obtained for the virtual driving experimental data and five for the vehicle driving experimental data. The preprocessed data is then trained with the LSTM network to establish road hypnosis identification models based on the vehicle driving experiment and the virtual driving experiment, whose accuracy rates are 93.27% and 97.01%, respectively. The principal component comprehensive models of the virtual experiment and the vehicle experiment are shown in Eqns (1) and (2), respectively.
$ C = \sum\limits_{n = 1}^{6} \dfrac{\lambda_n}{\sum\nolimits_{i = 1}^{6} \lambda_i} \times C_n $ (1)
$ C = \sum\limits_{n = 1}^{5} \dfrac{\lambda_n}{\sum\nolimits_{i = 1}^{5} \lambda_i} \times C_n $ (2)
where C is the comprehensive principal component, Cn is the nth principal component, λn is the eigenvalue of the nth principal component, and λi is the eigenvalue of the ith principal component.
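As a worked illustration of Eqns (1) and (2), the comprehensive component is an eigenvalue-weighted sum of the retained principal components. The NumPy sketch below uses hypothetical component scores and eigenvalues for the six-component (virtual driving) case.

```python
import numpy as np

def composite_component(scores, eigvals):
    """Eqns (1)/(2): weight each principal component by its share of the
    retained variance and sum the weighted components."""
    weights = eigvals / eigvals.sum()       # lambda_n / sum_i lambda_i
    return scores @ weights                 # C = sum_n weight_n * C_n

# Hypothetical example with six retained components.
rng = np.random.default_rng(1)
pc_scores = rng.normal(size=(100, 6))                      # C_1 ... C_6 per sample
eigenvalues = np.array([3.2, 1.9, 1.1, 0.6, 0.4, 0.2])     # illustrative eigenvalues
C = composite_component(pc_scores, eigenvalues)
```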
The LSTM network contains three gates: the forget gate, the input gate, and the output gate. The forget gate, the candidate cell state together with the input gate, and the cell state update are given in Eqns (3), (4), and (5), respectively.
$ f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) $ (3)
$ \tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C), \quad i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i) $ (4)
$ C_t = f_t * C_{t-1} + i_t * \tilde{C}_t $ (5)
where ft is the activation state of the forget gate, σ is the activation function, Wf is the weight matrix of the forget gate, ht−1 is the hidden state at time step t−1, xt is the input at time step t, bf is the bias term of the forget gate, $\tilde{C}_t$ is the candidate cell state at time step t, WC is the weight matrix of the candidate cell state, it is the activation state of the input gate, Wi is the weight matrix of the input gate, bi is the bias term of the input gate, and Ct is the cell state at time step t.
The LSTM network is used to train the data preprocessed by principal component analysis. Road hypnosis identification models are established with the virtual driving experimental dataset and the vehicle driving experimental dataset, respectively.
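For reference, the gate computations in Eqns (3)-(5) can be written out directly. The NumPy sketch below performs a single time step with illustrative shapes and is not the training code used in the study; deep learning frameworks implement these gates internally.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, W_f, b_f, W_i, b_i, W_c, b_c):
    """One step of Eqns (3)-(5); each weight matrix acts on [h_{t-1}, x_t]."""
    z = np.concatenate([h_prev, x_t])       # concatenated [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)            # forget gate, Eqn (3)
    c_tilde = np.tanh(W_c @ z + b_c)        # candidate cell state, Eqn (4)
    i_t = sigmoid(W_i @ z + b_i)            # input gate, Eqn (4)
    return f_t * c_prev + i_t * c_tilde     # cell state update, Eqn (5)
```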
In another study, bioelectricity data is used to establish a road hypnosis identification model. When preprocessing the bioelectricity data, the high-order spectral feature method is chosen to process the ECG and EMG data obtained from the experiment. The processed ECG and EMG data are further fused through principal component analysis. The bioelectricity characteristic parameters and their meanings are shown in Table 5. A descriptive statistical analysis of the bioelectricity characteristic parameters is shown in Table 6.
Table 5. Bioelectricity characteristic parameters and their meanings.
Name | Explanation
ECG Signal (mV) | ECG signal amplitude in millivolts
Heart Rate (bpm) | Heart rate in beats per minute
R-R Interval (ms) | Interval between consecutive R-waves
SDNN (ms) | Standard deviation of NN intervals
Table 6. Descriptive statistical analysis of bioelectricity characteristic parameters.
State | Minimum | Maximum | Mean | Mean (std. error) | Std. deviation | Variance | Skewness | Skewness (std. error) | Kurtosis | Kurtosis (std. error)
Road hypnosis | −1827.865 | 2595.328 | −0.017 | 1.026 | 332.738 | 110714.259 | 2.961 | 0.008 | 19.341 | 0.015
Normal state | −1672.206 | 2353.899 | 0.092 | 2.018 | 310.955 | 96692.828 | 3.071 | 0.016 | 20.910 | 0.032
Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), and the KNN algorithm are used to train and classify the preprocessed data. Road hypnosis identification models based on the virtual driving experiment and the vehicle driving experiment are trained, respectively. The models trained with the KNN algorithm achieve the highest accuracy, at 99.84% and 97.06%, respectively. The high-order spectral feature method is used to preprocess the ECG and EMG data collected in the experiment. The calculation formulas for the first-order to third-order moments have a similar form, as follows:
$ m_1 = E[X] $ (6)
$ m_2(\tau_1) = E[X(K) \cdot X(K + \tau_1)] = c_2(\tau_1) $ (7)
$ m_3(\tau_1, \tau_2) = E[X(K) \cdot X(K + \tau_1) \cdot X(K + \tau_2)] = c_3(\tau_1, \tau_2) $ (8)
The calculation method of the fourth-order moment is different from that of the first three orders. The calculation formula is as follows:
$ m_4(\tau_1, \tau_2, \tau_3) = E[X(K) \cdot X(K + \tau_1) \cdot X(K + \tau_2) \cdot X(K + \tau_3)] $ (9)
$ \begin{split} m_4(\tau_1, \tau_2, \tau_3) = \; & c_4(\tau_1, \tau_2, \tau_3) + c_2(\tau_1) \cdot c_2(\tau_3 - \tau_2) + \\ & c_2(\tau_2) \cdot c_2(\tau_3 - \tau_1) + c_2(\tau_3) \cdot c_2(\tau_2 - \tau_1) \end{split} $ (10)
In this study, three features are extracted from the bispectrum of the signal: the sum of the logarithmic amplitudes of the bispectrum (S1), the sum of the logarithmic amplitudes of the diagonal elements of the bispectrum (S2), and the first-order spectral moment of the amplitudes of the diagonal elements of the bispectrum (S3), calculated as follows:
$ S_1 = \sum\limits_{\Omega} \log\left(\left| B(f_1, f_2) \right|\right) $ (11)
$ S_2 = \sum\limits_{\Omega} \log\left(\left| B(f_k, f_k) \right|\right) $ (12)
$ S_3 = \sum\limits_{k = 1}^{N} k \log\left(\left| B(f_k, f_k) \right|\right) $ (13)
where B(f1, f2) is the Fourier transform of the third-order cumulant at frequencies f1 and f2, Ω is the region of the bispectrum over which the sum is taken, and k is the counting variable.
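To make the feature definitions concrete, the sketch below estimates a bispectrum with a simple FFT-based average over segments and then computes S1, S2, and S3 as in Eqns (11)-(13). The segment length and the summation region Ω are assumptions for illustration, not the settings used in the study.

```python
import numpy as np

def bispectrum_features(x, seg_len=256, eps=1e-12):
    """Illustrative direct bispectrum estimate of a 1-D signal, followed by the
    S1-S3 features of Eqns (11)-(13)."""
    x = np.asarray(x, dtype=float)
    n_seg = len(x) // seg_len
    B = np.zeros((seg_len, seg_len), dtype=complex)
    for s in range(n_seg):
        seg = x[s * seg_len:(s + 1) * seg_len]
        X = np.fft.fft(seg - seg.mean())
        # Accumulate B(f1, f2) as E[X(f1) X(f2) X*(f1 + f2)].
        idx = (np.arange(seg_len)[:, None] + np.arange(seg_len)[None, :]) % seg_len
        B += X[:, None] * X[None, :] * np.conj(X[idx])
    B = np.abs(B) / max(n_seg, 1)

    region = B[:seg_len // 2, :seg_len // 2]        # summation region Omega (assumed)
    diag = np.diag(region)                          # B(f_k, f_k)
    S1 = np.sum(np.log(region + eps))               # Eqn (11)
    S2 = np.sum(np.log(diag + eps))                 # Eqn (12)
    S3 = np.sum(np.arange(1, len(diag) + 1) * np.log(diag + eps))   # Eqn (13)
    return S1, S2, S3
```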
The preprocessed ECG and EMG data can be fused with the principal component analysis method. The principal components are calculated by determining the eigenvectors and eigenvalues of the covariance matrix, which measures the degree of joint variation between dimensions relative to their means. The covariance of two random variables, which describes how they vary together, is calculated as follows:
$ {\rm cov}(X, Y) = \sum\limits_{i = 1}^{N} \dfrac{(x_i - \overline{x})(y_i - \overline{y})}{N} $ (14)
where xi is the ith observation of variable X, yi is the ith observation of variable Y, $\overline{x}$ is the arithmetic average of all observations of variable X, $\overline{y}$ is the arithmetic average of all observations of variable Y, and N is the total number of observations.
The preprocessed data is trained with the KNN algorithm. Road hypnosis identification models can be established with the virtual driving experimental dataset and the vehicle driving experimental dataset, respectively.
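The fusion step can be sketched in a few lines of NumPy: center the feature matrix, form the covariance matrix of Eqn (14), take its eigen-decomposition, and project onto the leading eigenvectors. The component count below is illustrative.

```python
import numpy as np

def pca_fuse(features, n_components=5):
    """Minimal PCA fusion sketch; features has shape (n_samples, n_dims)."""
    X = features - features.mean(axis=0)          # center each dimension
    cov = (X.T @ X) / X.shape[0]                  # covariance matrix, as in Eqn (14)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues and eigenvectors
    order = np.argsort(eigvals)[::-1]             # sort by explained variance
    return X @ eigvecs[:, order[:n_components]]   # scores of the leading components
```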
Road hypnosis identification model
The purpose of this study is to identify road hypnosis by integrating the eye movement characteristics and bioelectricity characteristics of drivers. Therefore, the Stacking method in ensemble learning is chosen to establish the road hypnosis identification model. The Stacking algorithm is a heterogeneous ensemble method built on multiple different base learners. For road hypnosis, first-level learners based on the eye movement feature data and the bioelectricity feature data can be constructed respectively. A second-level learner can then be constructed from the prediction results of the two first-level learners, so that the two first-level learners complement each other, which effectively reduces errors caused by factors such as overfitting. The first-level learners are called base learners, and the second-level learner is called the meta-learner. The algorithm calculation process is shown in Fig. 3.
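A condensed sketch of this stacking scheme is shown below. The arrays X_eye and X_bio are hypothetical stand-ins for the eye movement and bioelectricity features, and a KNN classifier substitutes for the LSTM base learner purely to keep the sketch self-contained (a Keras LSTM is not a drop-in scikit-learn estimator); the out-of-fold predictions of the two base learners form the meta-learner's input, as described in the next paragraph.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Hypothetical data: eye-movement features, bioelectricity features, binary labels.
rng = np.random.default_rng(0)
X_eye, X_bio = rng.normal(size=(200, 6)), rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)   # k = 5 (80/20 splits)
eye_learner = KNeighborsClassifier(n_neighbors=5)    # stand-in for the LSTM base learner
bio_learner = KNeighborsClassifier(n_neighbors=5)    # KNN base learner

# Out-of-fold base-learner predictions become the meta-learner's training features.
p_eye = cross_val_predict(eye_learner, X_eye, y, cv=cv, method="predict_proba")[:, 1]
p_bio = cross_val_predict(bio_learner, X_bio, y, cv=cv, method="predict_proba")[:, 1]
meta_features = np.column_stack([p_eye, p_bio])

meta_learner = SVC(kernel="rbf").fit(meta_features, y)           # SVM meta-learner
```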
In this study, the LSTM model and the KNN model are selected as the two base learners to learn the eye movement feature data and the bioelectricity feature data collected in the experiments, and the SVM model is used as the meta-learner. The same processing method is applied to both the vehicle driving experimental data and the virtual driving experimental data. When training the base learners, the experimental data is divided into a training set and a testing set using k-fold cross-validation. The value of k is 5, so 80% of the data is selected as the training set each time and the remaining 20% is used as the test set. The eye movement data is trained with the LSTM algorithm to obtain the LSTM base learner, and the ECG and EMG data are trained with the KNN algorithm to obtain the KNN base learner. The SVM algorithm is then trained on the predictions of the LSTM and KNN base learners, and a road hypnosis identification model is established. The algorithm principle of the SVM model is as follows:
(1) An appropriate kernel function K(x, z) and penalty parameter C > 0 are chosen. The convex quadratic programming problem can then be constructed and solved with Eqn (15):
$ f(x) = \mathop{\min}\limits_{\alpha} \dfrac{1}{2}\sum\limits_{i = 1}^{N} \sum\limits_{j = 1}^{N} \alpha_i \alpha_j y_i y_j K(x_i, x_j) - \sum\limits_{i = 1}^{N} \alpha_i $ (15)
where αi and αj are the Lagrange multipliers of the SVM, yi and yj are the sample labels, and K(xi, xj) is the kernel function.
(2) A component $\alpha_j^*$ of the solution $\alpha^*$ that satisfies the condition $0 < \alpha_j^* < C$ is selected. The classification decision function can then be calculated as follows:
$ f(x) = {\rm sign}\left(\sum\limits_{i = 1}^{N} \alpha_i^* y_i K(x, x_i) + b^*\right) $ (16)
$ b^* = y_j - \sum\limits_{i = 1}^{N} \alpha_i^* y_i K(x_i, x_j) $ (17)
where $\alpha_i^*$ is the Lagrange multiplier of the SVM, yi and yj are the sample labels, K(x, xi) is the kernel function, and b* is the bias term.
(3) When the kernel function is the commonly used Gaussian kernel function $K(x, z) = \exp\left(-\dfrac{\left\| x - z \right\|^2}{2\sigma^2}\right)$, the classification decision function is:
$ f(x) = {\rm sign}\left(\sum\limits_{i = 1}^{N} \alpha_i^* y_i \exp\left(-\dfrac{\left\| x - x_i \right\|^2}{2\sigma^2}\right) + b^*\right) $ (18)
where $\alpha_i^*$ is the coefficient multiplied by the radial basis kernel function, and b* is the bias term.
The classification results are shown in Fig. 4.
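The correspondence between Eqn (18) and a fitted RBF-kernel SVM can be checked directly: scikit-learn stores the products α*·y for the support vectors in dual_coef_, the support vectors themselves, and the bias b* in intercept_. The data below is synthetic and only illustrates the calculation.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic two-feature data standing in for the stacked base-learner outputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

gamma = 0.5                                    # gamma = 1 / (2 * sigma^2)
clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)

# Eqn (18) by hand: sum_i alpha_i* y_i exp(-||x - x_i||^2 / (2 sigma^2)) + b*
x_new = np.array([0.3, -0.1])
k = np.exp(-gamma * np.sum((clf.support_vectors_ - x_new) ** 2, axis=1))
decision = clf.dual_coef_ @ k + clf.intercept_
print(decision, clf.decision_function([x_new]))   # identical decision values
```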
To evaluate the performance of the fused model, the Root Mean Square Error (RMSE) is used. RMSE represents the sample standard deviation of the differences between predicted and observed values and indicates the degree of dispersion of the sample. It is calculated as follows:
$ RMSE = \sqrt{\dfrac{1}{n}\sum\limits_{i = 1}^{n} \left| \hat{y}_i - y_i \right|^2} $ (19)
where RMSE is the root-mean-square error, yi is the true value, $\hat{y}_i$ is the predicted value, and n is the number of samples.
The data that support the findings of this study are available on request from the corresponding author, Wang X.
Abstract: Risky driving behaviors, such as driving fatigue and distraction, have recently received more attention. There is also much research on driving styles, driving emotions, older drivers, drugged driving, DUI (driving under the influence), and DWI (driving while intoxicated). Road hypnosis is a special behavior that significantly impacts traffic safety. However, there is little research on this phenomenon. Road hypnosis, as an unconscious state, can frequently occur while driving, particularly in highly predictable, monotonous, and familiar environments. In this paper, vehicle and virtual driving experiments are designed to collect biological characteristics including eye movement and bioelectric parameters. Typical tunnel and highway scenes are used as experimental scenes. LSTM (Long Short-Term Memory) and KNN (K-Nearest Neighbor) are employed as the base learners, while SVM (Support Vector Machine) serves as the meta-learner. A road hypnosis identification model based on ensemble learning is proposed, which integrates bioelectric and eye movement characteristics. The experimental results show that the proposed model has good identification performance. This study provides alternative methods and technical support for real-time and accurate identification of road hypnosis.
Key words: Road hypnosis, State identification, Active safety, Drivers, Intelligent vehicles