Validation and Interpretation of a Multimodal Drowsiness Detection System Using Explainable Machine Learning
| Article Title | Validation and Interpretation of a Multimodal Drowsiness Detection System Using Explainable Machine Learning |
| --- | --- |
| ERA Journal ID | 5039 |
| Article Category | Article |
| Authors | Hasan, Md Mahmudul, Watling, Christopher N. and Larue, Grégoire S. |
| Journal Title | Computer Methods and Programs in Biomedicine |
| Journal Citation | 243 |
| Article Number | 107925 |
| Number of Pages | 16 |
| Year | 2024 |
| Publisher | Elsevier |
| Place of Publication | Ireland |
| ISSN | 0169-2607 (print); 1872-7565 (electronic) |
| Digital Object Identifier (DOI) | https://doi.org/10.1016/j.cmpb.2023.107925 |
| Web Address (URL) | https://www.sciencedirect.com/science/article/pii/S0169260723005916 |
| Abstract | Background and Objective: Drowsiness behind the wheel is a major road safety issue, with efforts focused on developing drowsy driving detection systems. However, most drowsy driving detection studies using physiological signals have focused on developing a 'black box' machine learning classifier, with much less focus on 'robustness' and 'explainability', two crucial properties of a trustworthy machine learning model. This study therefore used multiple validation techniques to evaluate the overall performance of such a system with multiple supervised machine learning classifiers, and then unboxed the black-box model using explainable machine learning. Methods: Driving was simulated via a 30-minute psychomotor vigilance task while participants reported their level of subjective sleepiness and their physiological signals were recorded. Six different techniques, comprising subject-dependent and subject-independent approaches, were applied for model validation and robustness testing with three supervised machine learning classifiers (K-nearest neighbours, support vector machines, and random forest), and two explainable methods, SHapley Additive exPlanation (SHAP) analysis and partial dependency analysis (PDA), were leveraged for model interpretation. Results: The study identified leave-one-participant-out, a subject-independent validation technique, as the most useful, with the best sensitivity of 70.3%, specificity of 82.2%, and accuracy of 80.1% using the random forest classifier, addressing the autocorrelation issue caused by inter-individual differences in physiological signals. Moreover, the explainable results indicate the most important physiological features for drowsiness detection, with a clear cut-off in the decision boundary. Conclusions: The study's implications support rigorous validation for robustness testing and an explainable machine learning approach to developing a trustworthy drowsiness detection system and enhancing road safety. The explainable machine learning-based results show promise for real-life deployment of a physiological-signal-based, trustworthy in-vehicle drowsiness detection system, with higher reliability and explainability, along with a lower system cost. |
| Keywords | features, physiological signals, validation, interpretability, SHAP analysis, partial dependency analysis |
| Article Publishing Charge (APC) Funding | Other |
| ANZSRC Field of Research 2020 | 461199. Machine learning not elsewhere classified; 520206. Psychophysiology; 520402. Decision making |
| Byline Affiliations | University of New South Wales; Queensland University of Technology; School of Psychology and Wellbeing; Centre for Health Research; University of the Sunshine Coast |
https://research.usq.edu.au/item/z2vq9/validation-and-interpretation-of-a-multimodal-drowsiness-detection-system-using-explainable-machine-learning
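The leave-one-participant-out (LOPO) validation that the abstract highlights can be sketched as follows. This is an illustrative, stdlib-only Python sketch with made-up single-feature data and a trivial threshold classifier standing in for the paper's KNN/SVM/random-forest models; all names, values, and participant IDs are assumptions for demonstration, not the authors' code or data.

```python
# Illustrative LOPO cross-validation sketch (not the authors' implementation).
# Each fold holds out every sample from one participant, so the model is
# always tested on a person it has never seen -- this is what addresses the
# autocorrelation issue from inter-individual differences mentioned above.

def lopo_splits(participant_ids):
    """Yield (train_indices, test_indices), holding out one participant per fold."""
    for held_out in sorted(set(participant_ids)):
        train = [i for i, p in enumerate(participant_ids) if p != held_out]
        test = [i for i, p in enumerate(participant_ids) if p == held_out]
        yield train, test

def threshold_classifier(train_x, train_y):
    """Fit a one-feature threshold at the midpoint between class means
    (a toy stand-in for the supervised classifiers used in the paper)."""
    alert = [x for x, y in zip(train_x, train_y) if y == 0]
    drowsy = [x for x, y in zip(train_x, train_y) if y == 1]
    cut = (sum(alert) / len(alert) + sum(drowsy) / len(drowsy)) / 2
    return lambda x: 1 if x > cut else 0

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic single-feature data; x could stand for any physiological feature.
x = [0.1, 0.2, 0.8, 0.9, 0.15, 0.25, 0.7, 0.95, 0.05, 0.3, 0.85, 0.75]
y = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1]    # 0 = alert, 1 = drowsy
pid = [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]  # participant each sample came from

preds, truth = [], []
for train, test in lopo_splits(pid):
    clf = threshold_classifier([x[i] for i in train], [y[i] for i in train])
    preds += [clf(x[i]) for i in test]
    truth += [y[i] for i in test]

sens, spec = sensitivity_specificity(truth, preds)
```

Pooling predictions across folds, as done here, gives one overall sensitivity/specificity pair; averaging per-fold metrics is the other common convention, and which one the paper reports is not stated in this record.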