Interpretation of Artificial Intelligence Models in Healthcare: A Pictorial Guide for Clinicians
Article
Ardakani, Ali Abbasian, Airom, Omid, Khorshidi, Hamid, Bureau, Nathalie J., Salvi, Massimo, Molinari, Filippo and Acharya, U. Rajendra. 2024. "Interpretation of Artificial Intelligence Models in Healthcare: A Pictorial Guide for Clinicians." Journal of Ultrasound in Medicine. https://doi.org/10.1002/jum.16524
Article Title | Interpretation of Artificial Intelligence Models in Healthcare: A Pictorial Guide for Clinicians |
---|---|
ERA Journal ID | 16537 |
Article Category | Article |
Authors | Ardakani, Ali Abbasian, Airom, Omid, Khorshidi, Hamid, Bureau, Nathalie J., Salvi, Massimo, Molinari, Filippo and Acharya, U. Rajendra |
Journal Title | Journal of Ultrasound in Medicine |
Number of Pages | 30 |
Year | 2024 |
Publisher | John Wiley & Sons |
Place of Publication | United States |
ISSN | 0278-4297; 1550-9613 |
Digital Object Identifier (DOI) | https://doi.org/10.1002/jum.16524 |
Web Address (URL) | https://onlinelibrary.wiley.com/doi/epdf/10.1002/jum.16524 |
Abstract | Artificial intelligence (AI) models can play a more effective role in managing patients given the explosion of digital health records available in the healthcare industry. Machine-learning (ML) and deep-learning (DL) techniques are two methods used to develop predictive models that improve clinical processes in the healthcare industry. These models are also implemented in medical imaging machines to equip them with intelligent decision systems that aid physicians in their decisions and increase the efficiency of routine clinical practice. Physicians who work with these machines need insight into what happens in the background of the implemented models and how they work. More importantly, they need to be able to interpret the models' predictions, assess their performance, and compare models to find the one with the best performance and fewest errors. This review aims to provide an accessible overview of key evaluation metrics for physicians without AI expertise. In this review, we developed four real-world diagnostic AI models (two ML and two DL models) for breast cancer diagnosis using ultrasound images. Then, 23 of the most commonly used evaluation metrics were reviewed in an accessible manner for physicians. Finally, all metrics were calculated and applied to interpret and evaluate the outputs of the models. Accessible explanations and practical applications empower physicians to effectively interpret, evaluate, and optimize AI models to ensure safety and efficacy when integrated into clinical practice. |
Keywords | clinical translation; deep learning models; explainable artificial intelligence; machine learning models |
Contains Sensitive Content | Does not contain sensitive content |
ANZSRC Field of Research 2020 | 420311. Health systems |
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. |
Byline Affiliations | Shahid Beheshti University of Medical Sciences, Iran; University of Padua, Italy; University of Montreal, Canada; Polytechnic University of Turin, Italy; School of Mathematics, Physics and Computing; Centre for Health Research |
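The abstract describes calculating confusion-matrix-based evaluation metrics to compare diagnostic AI models. As an illustration only (the figures below are hypothetical and not taken from the article), a minimal sketch of how a few of the commonly reviewed metrics are derived from a binary confusion matrix:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Compute common diagnostic evaluation metrics from a binary
    confusion matrix (tp = true positives, fp = false positives,
    tn = true negatives, fn = false negatives)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall / true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    precision = tp / (tp + fp)     # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision, "f1": f1}

# Hypothetical example: a model tested on 200 ultrasound images
metrics = diagnostic_metrics(tp=90, fp=10, tn=85, fn=15)
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```

This covers only a subset of the 23 metrics the article reviews; the full set (e.g., AUC, likelihood ratios) requires model scores or prevalence information beyond a single confusion matrix.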
Permalink | https://research.usq.edu.au/item/z9972/interpretation-of-artificial-intelligence-models-in-healthcare-a-pictorial-guide-for-clinicians |