Abstract
We present an end-to-end framework for real-time melanoma detection on mole images acquired with mobile devices equipped with an off-the-shelf magnifying lens. We trained our models with transfer learning on EfficientNet convolutional neural networks, using the public-domain International Skin Imaging Collaboration (ISIC) 2019 and ISIC 2020 datasets. To mitigate class imbalance, we augmented the standard training pipeline with data balancing schemes based on oversampling and iterative cleaning through loss ranking. We also introduce a blurring scheme that emulates the aberrations produced by commonly available magnifying lenses, and a novel loss function that incorporates the difference in cost between false negative predictions (missed melanomas) and false positive predictions (benign lesions misclassified as malignant). Through preliminary experiments, we show that our framework produces models for real-time mobile inference with a controlled trade-off between false positive rate and false negative rate. On the ISIC-2020 dataset, the models achieve 96.9% accuracy, 98% balanced accuracy, ROC AUC of 0.98, 97.7% benign recall, and 97.2% malignant recall.
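The abstract only describes the cost-aware loss at a high level. As a minimal sketch of the idea, assuming the standard convention that the positive class is malignant, the snippet below implements a cost-weighted binary cross-entropy in PyTorch in which errors on malignant samples (missed melanomas) can be penalized more heavily than errors on benign samples. The function name `cost_weighted_bce` and the `fn_cost`/`fp_cost` weights are illustrative assumptions, not the paper's actual recall-loss formulation.

```python
import torch
import torch.nn.functional as F

def cost_weighted_bce(logits, targets, fn_cost=5.0, fp_cost=1.0):
    """Binary cross-entropy with asymmetric error costs (illustrative).

    targets: 1.0 = malignant (melanoma), 0.0 = benign.
    fn_cost weights errors on malignant samples (missed melanomas);
    fp_cost weights errors on benign samples (benign lesions flagged
    as malignant). Raising fn_cost relative to fp_cost shifts the
    operating point toward fewer false negatives at the price of
    more false positives.
    """
    per_sample = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")
    weights = torch.where(targets == 1.0,
                          torch.full_like(targets, fn_cost),
                          torch.full_like(targets, fp_cost))
    return (weights * per_sample).mean()

# Hypothetical usage with an EfficientNet-style classifier head:
#   loss = cost_weighted_bce(model(images), labels.float(), fn_cost=5.0)
```

Tuning `fn_cost` (or an equivalent asymmetry parameter) is one simple way to obtain the controlled false-positive/false-negative trade-off the abstract refers to.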
| Original language | English |
| --- | --- |
| Pages (from-to) | 161-169 |
| Number of pages | 9 |
| Journal | Journal of Image and Graphics (United Kingdom) |
| Volume | 11 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Jun 2023 |
Keywords
- The International Skin Imaging Collaboration (ISIC) dataset
- class imbalance
- melanoma detection
- mobile dermatoscopy
- recall loss
- refining