An Introduction to Transfer Learning

Transfer Learning is a Deep Learning technique where a model developed for one task is reused as the starting point for a model on another task or domain. Instead of training a model from scratch, it is often better to reuse a pre-trained model that was trained on a large dataset.

Transfer Learning is widely used to solve computer vision and Natural Language Processing tasks. Transfer Learning methods can improve the performance of a neural network, especially when little training data is available for the target task. Deep Neural Networks trained on very large-scale datasets such as ImageNet and COCO are commonly used for transfer learning.

Pre-trained model

A pre-trained model is a saved model that was previously trained on a very large dataset. Due to limited computational power, training a very large neural network on a large dataset from scratch is often impractical, which makes reusing these models attractive. Pre-trained models can be used for prediction, feature extraction, and fine-tuning.

1) Feature Extraction

Use the pre-trained model to extract meaningful features from new samples. You simply add a new classifier, which will be trained from scratch, on top of the pre-trained model so that the feature maps learned previously can be repurposed for your dataset. You do not need to retrain the entire model: the base convolutional network already contains features that are generically useful for classifying pictures. However, the final, classification part of the pre-trained model is specific to the original classification task, and consequently to the set of classes on which the model was trained.
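As a rough sketch of feature extraction in Keras (MobileNetV2, the 160x160 input size, and the binary classification head below are illustrative assumptions, not requirements):

import tensorflow as tf
from tensorflow import keras

# Load a pre-trained base model without its original classification head.
base_model = keras.applications.MobileNetV2(
    input_shape=(160, 160, 3),
    include_top=False,        # drop the ImageNet-specific classifier
    weights="imagenet",
)
base_model.trainable = False  # freeze the convolutional base

# Stack a new classifier, trained from scratch, on top of the frozen base.
inputs = keras.Input(shape=(160, 160, 3))
x = keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base_model(x, training=False)            # reuse the learned feature maps
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)  # e.g. a binary task
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds are your datasets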

2) Fine-Tuning

Fine-tuning is the process of unfreezing the top few layers of a frozen model base and jointly training both the newly added layers and the last layers of the base model. This allows us to fine-tune the higher-order feature representations in the base model to make them more relevant for the specific task.
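Continuing the feature-extraction sketch above, fine-tuning might look like this (the layer cutoff of 100 and the learning rate are arbitrary assumptions to be tuned per task):

# Unfreeze the base model, but keep the earlier, more generic layers frozen.
base_model.trainable = True
fine_tune_at = 100
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

# Recompile with a much lower learning rate so the pre-trained weights are
# only gently adjusted, then continue training the new classifier and the
# unfrozen top layers jointly.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5),
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)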

.     .     .

Approach to Using Pre-Trained Models

1)  Select a Base Model: Choose a pre-trained base model from the available models.

2)  Reuse the Model: The pre-trained model can be used as the starting point for a model on the second task. This may involve reusing all or parts of the model, depending on the modelling technique used.

3)  Tune the Model: Refine the pre-trained model on the input-output pair data available for the target task. A sketch of all three steps follows this list.
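A minimal sketch of these three steps using PyTorch and torchvision (ResNet50, the 10-class head, and the learning rate are illustrative assumptions; the weights argument requires torchvision 0.13+):

import torch
import torch.nn as nn
from torchvision import models

# 1) Select a base model pre-trained on ImageNet (ResNet50 as an example).
base = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# 2) Reuse the model: keep the pre-trained backbone and freeze its weights.
for param in base.parameters():
    param.requires_grad = False

# 3) Tune the model: replace the final layer for the new task (here, a
#    hypothetical 10-class problem) and train only that layer on the new data.
base.fc = nn.Linear(base.fc.in_features, 10)
optimizer = torch.optim.Adam(base.fc.parameters(), lr=1e-3)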

.     .     .

There are various pre-trained models available for computer vision and Natural Language Processing tasks. These pre-trained models are open-sourced, so we can easily use them through deep learning libraries such as Keras and PyTorch.
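For instance, a pre-trained ResNet50 can be used for prediction out of the box. A minimal Keras sketch ("elephant.jpg" is a placeholder image path):

import numpy as np
from tensorflow import keras
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions

# Load ResNet50 with its ImageNet weights and original classifier intact.
model = ResNet50(weights="imagenet")

# Load and preprocess an image to the 224x224 input size ResNet50 expects.
img = keras.preprocessing.image.load_img("elephant.jpg", target_size=(224, 224))
x = keras.preprocessing.image.img_to_array(img)
x = preprocess_input(np.expand_dims(x, axis=0))

preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])  # top-3 ImageNet labels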

Computer Vision

  • Xception
  • VGG16
  • VGG19
  • ResNet50
  • ResNet101
  • ResNet152
  • ResNetV2
  • InceptionV3
  • InceptionResNetV2
  • MobileNet
  • MobileNetV2
  • DenseNet
  • NASNet


Natural Language Processing

  • Word2Vec
  • GloVe
  • fastText
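These NLP models provide pre-trained word embeddings rather than full networks. A minimal sketch of reusing GloVe vectors (the filename glove.6B.100d.txt and the toy vocabulary are placeholders):

import numpy as np

# Each line of a GloVe file is: word v1 v2 ... v100.
embeddings = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        word, *values = line.split()
        embeddings[word] = np.asarray(values, dtype="float32")

# Build an embedding matrix for our own vocabulary; words missing from
# GloVe keep a zero vector.
vocab = {"the": 0, "transfer": 1, "learning": 2}  # toy vocabulary
embedding_dim = 100
matrix = np.zeros((len(vocab), embedding_dim))
for word, idx in vocab.items():
    if word in embeddings:
        matrix[idx] = embeddings[word]

# The matrix can now initialise a frozen Keras Embedding layer, e.g.:
# keras.layers.Embedding(len(vocab), embedding_dim, weights=[matrix], trainable=False)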
