A Deep Learning Framework for Removing Bias from Single-Photon Emission Computerized Tomography
Paper/Presentation Title | A Deep Learning Framework for Removing Bias from Single-Photon Emission Computerized Tomography |
---|---|
Presentation Type | Paper |
Authors | Ying, Josh Jia-Ching; Yang, Wan-Ju; Zhang, Ji; Ni, Yu-Ching; Lin, Chia-Yu; Tseng, Fan-Pin; Tao, Xiaohui |
Journal or Proceedings Title | Proceedings of 18th International Conference on Advanced Data Mining and Applications (ADMA 2022) |
Journal Citation | 13725, pp. 275-289 |
Number of Pages | 15 |
Year | 2022 |
Place of Publication | Switzerland |
ISBN | 9783031220630; 9783031220647 |
Digital Object Identifier (DOI) | https://doi.org/10.1007/978-3-031-22064-7_21 |
Web Address (URL) of Paper | https://link.springer.com/chapter/10.1007/978-3-031-22064-7_21 |
Web Address (URL) of Conference Proceedings | https://link.springer.com/book/10.1007/978-3-031-22064-7 |
Conference/Event | 18th International Conference on Advanced Data Mining and Applications (ADMA 2022) |
Event Details | 18th International Conference on Advanced Data Mining and Applications (ADMA 2022). Parent: International Conference on Advanced Data Mining and Applications. Delivery: In person. Event Date: 28 to 30 Nov 2022. Event Location: Brisbane, Australia |
Abstract | After acquisition by medical imaging equipment, noise in a raw medical image is removed through manual processing and correction to produce a usable image. However, manually processing medical images is time-consuming. If artificial intelligence is instead applied to these images to predict the type and severity of disease, patients can be prioritized according to the predictions, reducing the chance that the patients most in need of care miss timely treatment and increasing the efficiency of clinical visits. Most approaches use deep-learning image-feature segmentation to learn all of the features in an image. However, some of those features are not needed; these unwanted image features affect subsequent training, and we call them "biased information." When image features are trained with artificial intelligence, biased information may overpower the image features that matter for the target learning task, resulting in poor training results. Therefore, instead of learning all the features in an image, we should learn only those we need. This paper combines a biomedical image-segmentation convolutional neural network with principal component analysis to extract the main feature weights in the image data and to decide whether each feature is one we want to learn. If not, the feature is deleted so that it cannot affect subsequent training; the feature vectors we need are those associated with the first principal component. The learned results are then verified for accuracy with an image classification model. We find that after biased information is removed, its effect on classification is reduced and the accuracy of disease classification increases significantly, from less than 35% to more than 60%. |
Keywords | Deep learning; Bias correction; Image segmentation |
ANZSRC Field of Research 2020 | 460299. Artificial intelligence not elsewhere classified |
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. |
Series | Lecture Notes in Computer Science |
Byline Affiliations | National Chung Hsing University, Taiwan; School of Mathematics, Physics and Computing; Institute of Nuclear Energy Research, Taiwan; National Tsing-Hua University, Taiwan |
Repository Record | https://research.usq.edu.au/item/z5903/a-deep-learning-framework-for-removing-bias-from-single-photon-emission-computerized-tomography |
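The bias-removal idea described in the abstract, keeping only the features associated with the first principal component and deleting the rest, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name, the loading threshold, and the toy data are all assumptions introduced here.

```python
import numpy as np

def filter_biased_features(features, keep_threshold=0.1):
    """Illustrative sketch: drop feature columns whose loading on the
    first principal component falls below a threshold.

    features: (n_samples, n_features) array of extracted feature weights.
    Returns the filtered array and the indices of the kept columns.
    """
    # Center the data, as PCA requires.
    centered = features - features.mean(axis=0)
    # The rows of Vt from the SVD of the centered data are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    first_pc = vt[0]  # loadings of each feature on the first PC
    keep = np.abs(first_pc) >= keep_threshold
    return features[:, keep], np.where(keep)[0]

# Toy example: two informative columns plus one near-constant "bias" column.
rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))
informative = np.hstack([t, -t])                 # strongly loaded on PC1
bias = rng.normal(scale=1e-3, size=(100, 1))     # contributes almost nothing
X = np.hstack([informative, bias])

filtered, kept = filter_biased_features(X)       # keeps the two informative columns
```

In the paper's pipeline the features come from a biomedical segmentation CNN rather than a toy matrix, but the selection step is the same in spirit: features weakly associated with the first principal component are treated as biased information and removed before further training.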