Transfer Learning is a popular tool in the field of Deep Learning. It reuses a previously trained model for a new problem, which lets you train neural networks with little data and significantly reduces the time and memory required. This type of algorithm is also available in the ML.NET library. I want to demonstrate it on the example of Image Classification using a pre-trained TensorFlow model.
How is Transfer Learning defined?
Generally, in this approach, the knowledge of an already trained model is applied to a different but related problem. A simple example: suppose you have trained a classifier that recognizes whether a photo contains a car. You can now reuse the knowledge the model acquired during training to recognize whether a photo was taken in the city or in the countryside.
Why use Transfer Learning for Image Classification?
You can look at it from two sides. With huge datasets, training and testing a model from scratch would take a long time. With only a small number of images, on the other hand, the chances of training the classifier properly are slim. Transfer learning helps in both cases: reusing a previously trained model shortens the whole process and, at the same time, increases the chances of success even when the dataset is small.
Problem studied and the dataset used
Our task will be to train an image classification model that detects cells infected with the malaria parasite. The dataset consists of photos of cells, each assigned one of two labels: ‘parasitized’ or ‘uninfected’.
I used the malaria dataset from TensorFlow for the experiment. The data has 2 classes and contains a total of 27,558 cell images, with equal numbers of parasite-infected and uninfected cells, segmented from thin blood smear slide images. Sample photos of a parasitized and an uninfected cell are as follows:
Training process in the Image Classification API
We can distinguish two stages in the learning process: the bottleneck phase and the training phase. In the bottleneck phase, the set of training images is loaded, and the pixel values are used as input, or features, for the frozen layers of the pre-trained model. These layers are called frozen because no training occurs in them; the data simply passes through. The training phase then re-trains the last layer of the model on the outputs of the bottleneck phase. This process is performed iteratively, and each iteration returns loss and accuracy values. The phase continues until the loss is minimized and the accuracy is maximized.
After creating a console application project and downloading the following libraries from NuGet Packages:
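For reference, the Image Classification API typically requires the following NuGet packages (these are the packages used in the official ML.NET image classification tutorial; exact versions will vary):

```shell
dotnet add package Microsoft.ML
dotnet add package Microsoft.ML.Vision
dotnet add package Microsoft.ML.ImageAnalytics
dotnet add package SciSharp.TensorFlow.Redist
```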
you can proceed to the implementation and model creation. To begin with, you should create classes corresponding to the initially loaded data, the input data, and the output data. The created classes are shown in the listing:
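A minimal sketch of such classes, modeled on the conventions of the ML.NET image classification tutorial (the names `ImageData`, `ModelInput`, and `ModelOutput` are my assumptions, not necessarily the original listing), could look like this:

```csharp
// Describes an image as initially loaded from disk: its path and label.
public class ImageData
{
    public string ImagePath { get; set; }
    public string Label { get; set; }
}

// Input for the trainer: raw image bytes plus the label encoded as a key.
public class ModelInput
{
    public byte[] Image { get; set; }
    public uint LabelAsKey { get; set; }
    public string ImagePath { get; set; }
    public string Label { get; set; }
}

// Output of the trained model: the actual label and the predicted one.
public class ModelOutput
{
    public string ImagePath { get; set; }
    public string Label { get; set; }
    public string PredictedLabel { get; set; }
}
```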
Then you can load the images and write the code responsible for converting them into bytes, which you will use as your dataset:
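A sketch of this step, assuming the data classes from the earlier listing and the transform names from the ML.NET Image Classification API (`assetsPath` is a placeholder for your image folder):

```csharp
var mlContext = new MLContext(seed: 1);

// Load the file paths with their labels, then shuffle the rows.
IEnumerable<ImageData> images =
    LoadImagesFromDirectory(assetsPath, useFolderNameAsLabel: true);
IDataView imageData = mlContext.Data.LoadFromEnumerable(images);
IDataView shuffledData = mlContext.Data.ShuffleRows(imageData);

// Encode the string label as a key and load the raw image bytes,
// which will serve as the feature column for training.
var preprocessingPipeline = mlContext.Transforms.Conversion
    .MapValueToKey(inputColumnName: "Label", outputColumnName: "LabelAsKey")
    .Append(mlContext.Transforms.LoadRawImageBytes(
        outputColumnName: "Image",
        imageFolder: assetsPath,
        inputColumnName: "ImagePath"));

IDataView preProcessedData = preprocessingPipeline
    .Fit(shuffledData)
    .Transform(shuffledData);
```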
The method for loading images in .png format from the directory can be implemented as follows:
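One possible implementation, assuming the per-class subfolder layout of the malaria dataset (the helper name and the minimal `ImageData` stand-in below are illustrative):

```csharp
// Minimal stand-in for the data class from the earlier listing.
public class ImageData
{
    public string ImagePath { get; set; }
    public string Label { get; set; }
}

// Recursively scans a directory and yields one ImageData per .png file.
// With useFolderNameAsLabel = true, the parent folder name
// (e.g. "parasitized" or "uninfected") becomes the label.
public static IEnumerable<ImageData> LoadImagesFromDirectory(
    string folder, bool useFolderNameAsLabel = true)
{
    foreach (var file in Directory.GetFiles(folder, "*", SearchOption.AllDirectories))
    {
        if (Path.GetExtension(file) != ".png")
            continue;

        var label = useFolderNameAsLabel
            ? Directory.GetParent(file).Name
            : Path.GetFileName(file);

        yield return new ImageData { ImagePath = file, Label = label };
    }
}
```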
The next step is to create the test, training, and validation sets. You can use the DataOperationsCatalog class from the ML.NET library for that. For the training and test sets, I propose the popular split: 70% for training and 30% for testing.
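Assuming the preprocessed `IDataView` from the previous step, a sketch of the split using `DataOperationsCatalog.TrainTestSplit` (splitting the held-out 30% once more to obtain a validation set is a common convention, not something the text prescribes):

```csharp
// 70% for training, 30% held out.
DataOperationsCatalog.TrainTestData trainSplit =
    mlContext.Data.TrainTestSplit(preProcessedData, testFraction: 0.3);

// Split the held-out 30% again into validation and test portions.
DataOperationsCatalog.TrainTestData validationTestSplit =
    mlContext.Data.TrainTestSplit(trainSplit.TestSet);

IDataView trainSet = trainSplit.TrainSet;
IDataView validationSet = validationTestSplit.TrainSet;
IDataView testSet = validationTestSplit.TestSet;
```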
Now it is time to create the training pipeline and set up the options for image classification. Here, among other things, you specify the model architecture to be used for image classification training with transfer learning.
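A sketch of the trainer options and pipeline; the architecture choice (`ResnetV2101` is one of the pre-trained architectures offered by `ImageClassificationTrainer`) and the callback are illustrative:

```csharp
var classifierOptions = new ImageClassificationTrainer.Options()
{
    FeatureColumnName = "Image",
    LabelColumnName = "LabelAsKey",
    ValidationSet = validationSet,
    // The pre-trained TensorFlow model used for transfer learning.
    Arch = ImageClassificationTrainer.Architecture.ResnetV2101,
    // Print loss/accuracy for each iteration of the training phase.
    MetricsCallback = metrics => Console.WriteLine(metrics),
    TestOnTrainSet = false,
    // Cache bottleneck values so repeated runs can skip the bottleneck phase.
    ReuseTrainSetBottleneckCachedValues = true,
    ReuseValidationSetBottleneckCachedValues = true
};

var trainingPipeline = mlContext.MulticlassClassification.Trainers
    .ImageClassification(classifierOptions)
    // Map the predicted key back to the original string label.
    .Append(mlContext.Transforms.Conversion.MapKeyToValue("PredictedLabel"));
```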
After completing the previous steps, you can now go to training and testing the model:
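Training and evaluation might then look like this (metric names follow `MulticlassClassification.Evaluate`; the single-image prediction at the end is a sketch):

```csharp
// Train: the bottleneck phase runs first, then the training phase.
ITransformer trainedModel = trainingPipeline.Fit(trainSet);

// Score the test set and compute multiclass metrics.
IDataView predictions = trainedModel.Transform(testSet);
MulticlassClassificationMetrics metrics =
    mlContext.MulticlassClassification.Evaluate(
        predictions, labelColumnName: "LabelAsKey");

Console.WriteLine($"MicroAccuracy: {metrics.MicroAccuracy:P2}");
Console.WriteLine($"MacroAccuracy: {metrics.MacroAccuracy:P2}");

// Classify a single sample image from the test set.
var predictionEngine = mlContext.Model
    .CreatePredictionEngine<ModelInput, ModelOutput>(trainedModel);
ModelInput sample = mlContext.Data
    .CreateEnumerable<ModelInput>(testSet, reuseRowObject: true)
    .First();
ModelOutput prediction = predictionEngine.Predict(sample);
Console.WriteLine($"Image: {Path.GetFileName(prediction.ImagePath)} | " +
                  $"Actual: {prediction.Label} | Predicted: {prediction.PredictedLabel}");
```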
Results and summary
After starting the application, the bottleneck phase should begin. In the console, you should see something similar:
After some time, depending on the size of the dataset you use, it should perform the training phase:
As you can see, the accuracy was over 90%. The classification of the sample images was also successful. Overall, the results look very good, but of course a deeper analysis and additional metrics would be needed to confirm their value.