Deep-Learning Algorithm Shows Positive Results in Classifying Dermatologic Conditions Through Images

This new tool, using an automated analysis of different parts of the body, could help clinicians to address previously unmet needs for treatment in conditions such as psoriasis.

Sebastian Sitaru, MD

Credit: LinkedIn

A newly developed algorithm using convolutional neural networks achieved 89% accuracy in identifying body parts in dermatological images, according to recent findings, demonstrating its potential to improve clinical care and research.1

The tool was developed to help improve, among other things, diagnostic accuracy, given that algorithms for organ or body part recognition had been largely limited to sources such as X-ray images and computed tomography (CT).2,3

The research was authored by Sebastian Sitaru, MD, from the Department of Dermatology and Allergy at the Technical University of Munich’s School of Medicine in Munich. Sitaru and the other investigators noted that the existing literature demonstrated an unmet need to pursue automatic body part identification by using clinical dermatological images.

“Therefore, in the present study we developed a deep-learning algorithm which aims to classify dermatological images from a clinical database to different body parts to improve the diagnosis, treatment, and research of dermatological conditions,” Sitaru and colleagues wrote.

Background and Findings

The investigators used real-world clinical photographs of dermatology patients taken at the Technical University of Munich’s Department of Dermatology and Allergy. Depending on their disease and the frequency of their visits, patients were sometimes photographed across several sessions.

The research team collected images from the University’s database covering the period from 2006 to 2019. The team randomly selected 8,338 images from the database and manually assigned each one, via a web frontend, to 1 of 12 body-part categories.

Additionally, images that could not be attributed to a single body part, for reasons including high zoom levels, were labeled “not classifiable” and excluded from testing. After the investigators’ sorting, 6,219 labeled images remained.

The research team then grouped the common diagnoses into 41 categories, with rare diagnoses (<2% of images) and images without a diagnosis placed into an “other” category.
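The grouping step described above can be sketched in a few lines. This is an illustrative implementation of the <2% threshold, not the study’s actual code; the function name and the use of `None` for missing diagnoses are assumptions.

```python
from collections import Counter

def group_rare_diagnoses(diagnoses, threshold=0.02):
    """Replace diagnoses covering fewer than `threshold` of all images
    (or images with no diagnosis, here None) with the label 'other'."""
    counts = Counter(d for d in diagnoses if d is not None)
    n = len(diagnoses)
    return [
        d if d is not None and counts[d] / n >= threshold else "other"
        for d in diagnoses
    ]
```

Applied to a list of per-image diagnosis labels, this yields the reduced label set in which every remaining class has enough examples to train on.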

The investigators used the Xception network architecture without pre-trained weights, which outperformed the other architectures available in the Keras framework. The data were split into training and test datasets, and the network was trained using backpropagation with the Adam optimizer.
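The study names the Xception architecture in Keras, trained from scratch with Adam. A minimal sketch of that setup might look like the following; the input size, learning rate, and loss are illustrative assumptions, not details reported by the authors.

```python
import tensorflow as tf

# Xception without pre-trained weights (weights=None), as in the study,
# with 12 output classes for the 12 body-part categories.
# Input shape and learning rate are assumptions for illustration.
model = tf.keras.applications.Xception(
    weights=None,
    input_shape=(299, 299, 3),
    classes=12,
)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```

Training from scratch (rather than fine-tuning ImageNet weights) means the network learns features specific to clinical dermatological photographs.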

Data augmentation techniques, including rotation, random zooming, and horizontal flipping, were applied during training. The network’s performance was then assessed using balanced accuracy.
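Balanced accuracy, the metric mentioned above, is the mean of per-class recall, so each body-part category counts equally regardless of how many images it contains. A small self-contained sketch (the function name is an assumption):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall: the fraction of correctly predicted
    images is computed per class, then averaged over classes, so
    frequent body parts do not dominate the score."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for true_label, pred_label in zip(y_true, y_pred):
        total[true_label] += 1
        if true_label == pred_label:
            correct[true_label] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)
```

This matters for a dataset like this one, where some body parts (e.g. torso) are photographed far more often than others.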

The algorithm was later applied by the research team to a clinical database of about 200,000 images, with diagnoses grouped and body parts assigned using a coordinate grid and interpolation algorithm.

Regarding results, the algorithm achieved a mean accuracy of 89%, surpassing the performance of previous segmentation algorithms. The distribution of affected body parts in psoriasis, eczema, and non-melanoma skin cancer was also examined: non-melanoma skin cancer predominantly affected the face and torso, while psoriasis and eczema commonly involved the torso, legs, and hands.

The research team noted discrepancies between the photographed body areas and the typical affected regions described in the literature, suggesting the torso may be an additional but less recognized site of predilection for these conditions.

“In conclusion, we have presented the to date first algorithm to accurately label the body part of clinical dermatological images of both common and rare diagnoses,” they wrote. “Applications of this algorithm include support of clinical practice by facilitating diagnosis and treatment planning.”

References

  1. Sitaru S, Oueslati T, Schielein MC, et al. Automatic body part identification in real-world clinical dermatological images using machine learning. JDDG: Journal der Deutschen Dermatologischen Gesellschaft. 2023; 1-7. https://doi.org/10.1111/ddg.15113.
  2. Dicken V, Lindow B, Bornemann L, et al. Rapid image recognition of body parts scanned in computed tomography datasets. Int J Comput Assist Radiol Surg. 2010; 5: 527-535.
  3. Zhou X. Automatic Segmentation of Multiple Organs on 3D CT Images by Using Deep Learning Approaches. Adv Exp Med Biol. 2020; 1213: 135-147.
© 2024 MJH Life Sciences

All rights reserved.