
Personalized Health Navigation for Drivers

Source: 泰然健康網. Published: December 16, 2024, 06:19


Abstract:

To decrease the number of traffic accidents caused by changes in drivers' physical and mental health conditions and to accomplish automatic monitoring and real-time optimization of drivers' health states, a closed-loop feedback system framework for drivers' health states was proposed based on the basic theory of cybernetics. First, a personalized health model was established from a driver's log data. Then, by combining this model with real-time multimodal data on the driver, the vehicle, and the road environment from various sensors, the driver's current health state was estimated. Finally, given the driver's health goal, executable behavior suggestions were provided to navigate the driver toward an optimized health state. For the most critical phase, real-time monitoring, a multimodal fusion model based on an attention-based convolutional neural network and long short-term memory network (CNN-LSTM) was proposed to estimate three aspects of driver health: stress, emotion, and fatigue. Experiments on both private and public datasets achieved detection accuracies above 90%, demonstrating that the proposed model and methods can accurately monitor drivers' stress, emotion, and fatigue states in real time, thus providing a solid basis for implementing a personalized health navigation system for drivers (PHN-D).
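The attention step of the CNN-LSTM fusion model described above can be sketched in a few lines of numpy. This is a generic additive-attention pooling over a sequence of LSTM hidden states — the dimensions, weight names, and window length below are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, W, v):
    """Collapse LSTM hidden states H of shape (T, d) into one context vector.

    score_t = v . tanh(W-projected h_t); weights = softmax over time;
    context  = weighted sum of the hidden states.
    """
    scores = np.tanh(H @ W) @ v      # (T,) one relevance score per time step
    weights = softmax(scores)        # attention distribution over the window
    return weights @ H, weights      # context (d,), weights (T,)

rng = np.random.default_rng(0)
T, d = 20, 64                        # assumed window length and hidden size
H = rng.standard_normal((T, d))      # stand-in for real LSTM outputs
W = rng.standard_normal((d, d))      # learnable projection (random here)
v = rng.standard_normal(d)           # learnable scoring vector (random here)
context, weights = attention_pool(H, W, v)
```

In a trained model, `W` and `v` are learned, and one such context vector per modality would be concatenated before the final classification layer.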

Figure  1.   Architecture of personalized health navigation for drivers

Figure  2.   Multimodal fusion model for driver stress detection

Figure  3.   Multimodal fusion model for driver emotion detection

Figure  4.   Self-assessment manikin (SAM) used to describe levels of dimensional emotion

Figure  5.   Multimodal fusion model for driver fatigue detection

Figure  6.   Structure of the residual network

Figure  7.   CARRS-Q advanced driving simulator

Figure  8.   Some sample frames of NTHU-DDD dataset

Figure  9.   Average accuracy of the fusion model in different window sizes

Figure  10.   Average accuracy of each emotional dimension in different models

Table  1   Different stressors in different driving scenarios

Scenario | Vehicles | Road conditions | Simulator parameters | Weather | Time
City 1 | 0 | — | — | — | daytime
City 2 | 30 | narrow and curved | — | — | night
Highway | 50 | curved | overtaking, lane changes, speeding, tailgating, etc. | rain density (0.2–1.0), fog | night
CBD 1 | 50 | narrow, curved, sharp turns | overtaking, lane changes, speeding, tailgating, etc. | rain density (0.3–0.6), fog | daytime
CBD 2 | 60 | curved and sharp turns | overtaking, lane changes, speeding, tailgating, etc. | — | daytime

Table  2   Parameters of 1D-CNN-LSTM model for stress detection

Layer | Parameters
Convolution | filters = 20, kernel size = (10, 1), stride = 1
Pooling | pool size = (2, 1), stride = 2
Convolution | filters = 40, kernel size = (5, 1), stride = 1
Pooling | pool size = (2, 1), stride = 2
Convolution | filters = 80, kernel size = (3, 1), stride = 1
Pooling | pool size = (2, 1), stride = 2
LSTM | hidden size = 64
LSTM | hidden size = 64
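A quick way to sanity-check the layer stack in Table 2 is to propagate the temporal dimension through it. The paper does not state the padding scheme, so the sketch below assumes "valid" (no-padding) convolutions and an illustrative input window of 100 samples:

```python
def conv1d_len(n, kernel, stride):
    """Output length of a 'valid' 1-D convolution over n time steps."""
    return (n - kernel) // stride + 1

def pool1d_len(n, size, stride):
    """Output length of a 1-D max pooling over n time steps."""
    return (n - size) // stride + 1

def stress_model_time_dim(n):
    """Temporal length after Table 2's three conv/pool pairs.

    Kernel sizes (10, 5, 3) and pool size 2 / stride 2 follow Table 2;
    valid padding is an assumption.
    """
    for kernel in (10, 5, 3):
        n = conv1d_len(n, kernel, stride=1)
        n = pool1d_len(n, size=2, stride=2)
    return n  # sequence length fed to the two LSTM layers

print(stress_model_time_dim(100))  # 9 time steps reach the LSTMs
```

The same arithmetic applies to Table 3's emotion model, whose extra (3, 1) convolution shortens the sequence slightly more before pooling.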

Table  3   Parameters of 1D-CNN-LSTM model for emotion detection

Layer | Parameters
Convolution | filters = 20, kernel size = (10, 1), stride = 1
Pooling | pool size = (2, 1), stride = 2
Convolution | filters = 40, kernel size = (3, 1), stride = 1
Convolution | filters = 40, kernel size = (3, 1), stride = 1
Pooling | pool size = (2, 1), stride = 2
Convolution | filters = 80, kernel size = (3, 1), stride = 1
Pooling | pool size = (2, 1), stride = 2
LSTM | hidden size = 64
LSTM | hidden size = 64

Table  4   Feature data in different modalities

Modality | Features
Eye data | pupil diameter; gaze dispersion (x and y axes); blink frequency
Vehicle data | steering wheel angle; brake pedal data; accelerator pedal data
Environment data | distance to the vehicle ahead; time of day; lane width and number of lanes; weather conditions (light, moderate, or heavy rain; visibility)
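The eye-modality features in Table 4 can be computed per time window from raw eye-tracker samples. The paper does not specify its exact extraction procedure, so the numpy sketch below is illustrative: gaze dispersion is taken as the standard deviation along each axis, and a blink is counted at each rising edge of a closed-eye flag:

```python
import numpy as np

def eye_features(gaze_xy, pupil, closed, hz):
    """Window-level eye features in the spirit of Table 4 (illustrative).

    gaze_xy: (N, 2) gaze coordinates; pupil: (N,) pupil diameters;
    closed: (N,) booleans, True while the eye is closed; hz: sample rate.
    """
    dispersion = gaze_xy.std(axis=0)                   # gaze spread along x and y
    blinks = (np.diff(closed.astype(int)) == 1).sum()  # rising edges = blinks
    window_s = len(closed) / hz                        # window duration in seconds
    return {
        "pupil_diameter_mean": float(pupil.mean()),
        "gaze_dispersion_x": float(dispersion[0]),
        "gaze_dispersion_y": float(dispersion[1]),
        "blink_frequency_hz": blinks / window_s,
    }

# Synthetic 1 s window sampled at 10 Hz containing two blinks.
rng = np.random.default_rng(1)
gaze = rng.standard_normal((10, 2))
pupil = np.full(10, 3.5)                               # constant 3.5 mm pupil
closed = np.array([0, 1, 1, 0, 0, 1, 0, 0, 0, 0], dtype=bool)
feats = eye_features(gaze, pupil, closed, hz=10)
print(feats["blink_frequency_hz"])                     # two blinks over 1 s -> 2.0
```

Vehicle and environment features would be windowed the same way before being fed to their respective CNN-LSTM branches.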

Table  5   Average accuracy of different models for driver stress detection in different modalities (unit: %)

Method | Environment | Vehicle | Eye | Fusion
LSTM | 50.6 | 44.9 | 58.1 | 71.5
CNN-LSTM | 51.0 | 73.6 | 85.8 | 90.8
CNN-LSTM-Attention | 52.6 | 85.1 | 92.9 | 95.5

Table  6   Average accuracy of each emotional dimension in different modalities (unit: %)

Modality | Valence | Arousal | Dominance
Environment | 37.87 | 44.64 | 45.86
Vehicle | 66.85 | 69.16 | 70.65
Eye | 95.28 | 94.37 | 94.13
Environment + vehicle | 70.96 | 72.87 | 74.22
Eye + vehicle | 97.29 | 96.79 | 96.92
Environment + eye | 98.71 | 98.88 | 98.75
Environment + vehicle + eye | 98.89 | 98.87 | 98.82

Table  7   Driver fatigue detection performance of the fusion model in different modalities (unit: %)

Modality | Average F1 | Average accuracy
Eye | 90.78 | 89.20
Mouth | 90.30 | 88.76
Head | 90.89 | 89.24
Eye + mouth | 94.25 | 93.35
Eye + head | 93.57 | 92.55
Mouth + head | 92.65 | 91.33
Eye + mouth + head | 94.66 | 93.77

