Abstract
Medical experts are often skeptical of data-driven models due to their lack of explainability. Many experimental studies begin with broad unsupervised learning, and clustering in particular, to uncover patterns in newly acquired data without prior knowledge. Explainable Artificial Intelligence (XAI) increases trust between machine-learning-based virtual assistance and medical experts: questions about how the data are analyzed and which factors enter the decision-making process can be answered confidently with the help of XAI. In this paper, we introduce an improved hybrid classical-quantum clustering approach (an improved qk-means algorithm) augmented with an explainability method. The proposed model combines the Local Interpretable Model-agnostic Explanations (LIME) method with the improved quantum k-means (qk-means) algorithm to diagnose abnormal activity in breast cancer images and Knee Magnetic Resonance Imaging (MRI) datasets and to generate explanations of its predictions. Compared with existing algorithms, the higher clustering accuracy of the generated clusters increases trust in the model-generated explanations. In practice, the experiment uses 600 breast cancer (BC) patient records with seven features and 510 knee MRI records with five features. The results show that the improved hybrid approach outperforms the classical one on both the BC and knee MRI datasets.
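The qk-means algorithm described above builds on classical k-means clustering, replacing the classical distance computation with a quantum estimate (commonly a swap-test-style routine). As an illustrative sketch only (it does not reproduce the paper's improved quantum routine, LIME explanations, or datasets), here is a minimal classical Lloyd's k-means in pure Python, with a comment marking the step a quantum variant would replace:

```python
import math

def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means over tuples/lists of feature values."""
    # Deterministic init for reproducibility: first k points.
    # (A real implementation would use random or k-means++ seeding.)
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        # Assignment step: nearest centroid by Euclidean distance.
        # A qk-means variant would estimate this distance with a
        # quantum subroutine instead of math.dist.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[j].append(p)
        # Update step: each centroid becomes the mean of its cluster.
        for j, members in enumerate(clusters):
            if members:
                centroids[j] = [sum(x) / len(members) for x in zip(*members)]
    return centroids, clusters

# Two well-separated 2-D blobs; k = 2 should recover one cluster each.
data = [(0.0, 0.1), (5.0, 5.1), (0.2, 0.0),
        (0.1, 0.2), (5.2, 5.0), (5.1, 5.2)]
cents, clus = kmeans(data, k=2)
print([len(c) for c in clus])  # -> [3, 3]
```

In the hybrid scheme, only the per-point distance estimation moves to quantum hardware; the assignment and centroid-update loop remains classical.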
Original language | English |
---|---|
Article number | 110413 |
Journal | Knowledge-Based Systems |
Volume | 267 |
DOIs | |
Publication status | Published - 12 May 2023 |
Keywords
- Explainable AI
- LIME
- Quantum clustering
- Quantum computing
- Quantum machine learning
- qk-means algorithm