
ApnaDermato: Human Skin Disease Finder Using Machine Learning



Department of Computer Engineering, Vivekanand Education Society's Institute of Technology, Chembur, Mumbai, Maharashtra 400074, India

*Corresponding author.


Anyone may develop a skin problem at any time for numerous reasons. Such problems include cracked skin, dry skin, abnormal growths, discoloration due to infection, etc. Skin problems affect more than 20 million patients and cause more than 10 million deaths across the globe. In this chapter, an image of the affected area of skin is taken and the name of the disease is predicted using a classification model that detects the type of skin disease. Transfer learning, an active research area in machine learning, is applied to observe whether a transferred model is useful for correct classification of skin diseases that need urgent treatment, especially in infants. In this work, the MobileNet architecture is used, trained over 1000 images. Finally, the network was tested with 500 images and achieved an accuracy of 85%.


Image processing and machine learning have found significant applications in various fields. Nowadays, skin cancer is a significant public health problem that can be approached with the same techniques used in the dermatology field. Dermatology is an essential field of medical science, where skin diseases are considered a common illness, affecting people from all walks of life.

Medical imaging is vital in dermatology, which has an endless list of conditions under treatment. The baseline method is to detect skin disease early and keep track of any change in the affected area, such as swelling, color change, or growth. Generally, doctors examine patients directly, and this is still the first resource used by specialists. However, real-time skin problem detection can make life easier.


Many skin disease cases lead to death simply because the patient did not get treatment in the early stages of the disease. A patient's ignorance, unawareness, or inability to access appropriate diagnostic tools can be reasons the disease is not cured at an initial stage. With time, the growing illness conquers the body's vital organs and leads to death. There is a need to develop a system that allows users to diagnose skin diseases and get them cured quickly. This chapter presents a brief review of human skin disease detection using machine learning.


Earlier, people used to travel long distances to meet doctors for a remedy for any severe ailment, and people still prefer to meet doctors face to face. With the advancement of technology and people's busy schedules, it becomes difficult to find time to visit a doctor before a health issue becomes severe. If anything happens to an infant, parents visit the nearest clinic. But sometimes it is not possible to visit a clinic, so parents need to reach the doctor through some other mode of communication, that is, the Internet. Reaching a patient as early as possible is also a significant issue, and delays lead to many deaths. Online expert advice from various doctors can play a vital role in saving a life in critical situations. Therefore, there is a need to bridge the gap between doctor and patient by providing a single platform on which patients can send their queries to doctors online and get a diagnosis from expert doctors, which can truly be a lifesaver for a patient.

In this chapter, our focus is to extract various data from the input (an image of infected skin) and provide a diagnosis based on it. ApnaDermato is the name given to this mobile application, where “apna” means “our” and “dermato” means “skin disease helper.” This application is made to enable communication between people residing in remote areas and doctors working in cities. A person suffering from a skin disease can take a picture of the infected area and post it on the app for diagnosis. On the doctor's side, the machine learning model detects the disease and suggests it to the doctor. The doctor can overwrite the automated diagnosis in case the model's prediction is inaccurate.

The baseline steps in image processing are the following.

  • Acquisition: This is the first step in digital image processing. It consists of preprocessing such as scaling down resolution so that processing gets faster.
  • Image enhancement: This step is used to get details that are obscured or to highlight certain features in an image. Example: changing brightness and contrast, etc.
  • Image restoration: This step is used to improve the appearance of the image. Mathematical or probabilistic models drive the process of restoration.
  • Color image processing: This step includes color modeling and image processing of digital images.
  • Wavelets and multiresolution processing: Wavelets are useful for representing images in various resolutions.
  • Image compression: This step compresses the image size for processing.
  • Morphological processing: This step is used to extract shape-related information from the image.
  • Segmentation procedure: This step divides the input image into subparts which the algorithm computes separately.
  • Representation and description: This step transforms raw data into a form suitable for subsequent computer processing; choosing a representation is part of the solution.
  • Object detection: This step assigns labels to input images that help to predict the output (Figure 5.1).
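The first few steps above (acquisition with scaling, enhancement, and compression) can be sketched in code. The snippet below is a minimal illustration using Pillow on a synthetic in-memory image; the 224 × 224 target size, contrast factor, and JPEG quality are illustrative assumptions, not values from the chapter.

```python
import io
from PIL import Image, ImageEnhance

# Acquisition: in the real app this would be a photograph of the
# affected skin area; here a synthetic RGB image stands in for it.
img = Image.new("RGB", (1024, 768), color=(180, 120, 100))

# Preprocessing: scale the resolution down so later steps run faster.
img = img.resize((224, 224))

# Enhancement: raise the contrast to bring out obscured detail.
img = ImageEnhance.Contrast(img).enhance(1.5)

# Compression: re-encode as JPEG at reduced quality (in memory here).
buf = io.BytesIO()
img.save(buf, format="JPEG", quality=85)
```

In practice the resized, enhanced image is what gets fed to the classification model described next.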

The MobileNet model is based on the “Inception V3 classifier,” which consists of two parts: one part for feature extraction and another for classification of the object. A convolutional neural network (CNN) is the base of the model that we have used, and the pretrained model we have used is based on the above phases. The CNN algorithm belongs to the category of deep learning. It takes an image as input, assigns weights and biases to the objects in the image, and can differentiate among the detected objects.

In our system, we have used a technique called transfer learning. Transfer learning is a technique in which we can improve learning in a new task through the transfer of knowledge from a related task that has already been learned. The weights and biases of the existing system are reused to train the new set of systems to give the desired results. The model that we have used is the pretrained Inception v3 classifier from Google. This model consists of two parts:

  • Feature extraction part with a CNN.
  • Classification part (with fully connected and softmax layers).

FIGURE 5.1 Image processing steps.

The pretrained Inception-V3 model achieves good accuracy in recognizing general objects across 1000 classes, such as “Zebra,” “Dalmatian,” and “Dishwasher.” This model extracts general features from input images in the first phase and classifies them based on those features in the second phase. In transfer learning, when we build a new model to classify our own dataset, we reuse the feature extraction part of the model and retrain the classification part on our dataset. Since we do not have to train the feature extraction part (which happens to be the most complex part of the model), we can train the model with less computational resources and training time, which is the main advantage of using transfer learning.
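The idea of freezing the feature extraction part and retraining only the classification head can be demonstrated without any deep learning framework. The sketch below is a toy numpy illustration, not the chapter's actual model: a fixed random projection stands in for the frozen convolutional base, and only a small softmax head is trained on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the pretrained feature extraction part: a fixed (frozen)
# projection from raw pixels to a feature vector. In the real system
# this role is played by the convolutional base of the pretrained model.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    # Frozen weights plus a ReLU; these weights are never updated.
    return np.maximum(x @ W_frozen, 0.0)

# New classification head for our own classes (e.g. 3 skin conditions);
# only these weights are trained.
n_classes = 3
W_head = np.zeros((16, n_classes))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy dataset: 60 "images" of 64 pixels each, shifted so classes differ.
X = rng.normal(size=(60, 64))
y = np.repeat(np.arange(n_classes), 20)
X[y == 1] += 1.0
X[y == 2] -= 1.0

F = extract_features(X)       # features come from the frozen base
Y = np.eye(n_classes)[y]      # one-hot labels

# Train only the head with plain gradient descent on cross-entropy loss.
for _ in range(300):
    P = softmax(F @ W_head)
    W_head -= 0.1 * F.T @ (P - Y) / len(X)

acc = (softmax(F @ W_head).argmax(axis=1) == y).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

Because only the small head is optimized, training is cheap; this is the computational saving the paragraph above describes, scaled down to a few dozen parameters.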

During the process of transfer learning, three questions must be answered:

What to transfer?

This is the most crucial step in the whole process of transfer learning. When trying to answer this question, we try to identify which portion of knowledge is shared between the source and the target.

When to transfer?

There can be situations where transferring knowledge just for the sake of it makes matters worse rather than better (negative transfer). We should aim to use transfer learning to improve target task performance, not degrade it. We need to be careful about when to transfer and when not to.

How to transfer?

In this final step, we identify ways of transferring knowledge across domains. This involves changes to existing algorithms and different techniques, as required. In the final layer of the neural network, we can set the number of nodes based on our application domain.
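Sizing the final layer to the application domain simply means giving it one output node per target class. The snippet below illustrates this with hypothetical disease class names (not taken from the chapter) and a softmax readout over example logits.

```python
import numpy as np

# Hypothetical disease classes for the target domain; the final layer
# gets exactly one output node per class.
classes = ["acne", "eczema", "psoriasis", "ringworm"]
n_output_nodes = len(classes)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Example logits as they might come out of the final dense layer.
logits = np.array([0.2, 2.9, 0.4, 1.1])
probs = softmax(logits)
prediction = classes[int(np.argmax(probs))]
print(prediction)  # eczema
```

Changing the target domain (say, adding a fifth condition) only changes `n_output_nodes`; the transferred feature extraction part is untouched.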
