Thursday, January 23, 2020

Transfer learning




TRANSFER LEARNING
Using a pre-trained network on images that were not in its original training set is known as transfer learning.



DIFFERENT ARCHITECTURES WE CAN USE

THESE ARE THE TOP-1 AND TOP-5 ERRORS FOR EACH ARCHITECTURE

  • The number in a name such as VGG-11 is the number of layers (11 here).
  • When choosing an architecture we need to trade off accuracy against speed.
  • These networks are massively deep; they have hundreds of hidden layers.


MODELS
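
A minimal sketch of loading pre-trained architectures from torchvision (the constructor names below are the standard torchvision ones; pretrained=True downloads the ImageNet weights):

    import torch
    from torchvision import models

    # Each constructor returns an ImageNet-pretrained network
    vgg16 = models.vgg16(pretrained=True)
    resnet50 = models.resnet50(pretrained=True)
    densenet = models.densenet121(pretrained=True)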



USING DENSENET

  • DenseNet-121 has 121 layers.


LOADING AND ARCHITECTURE
  • After loading the model we can inspect its architecture: it has a features part and a classifier part (see the sketch after this list).
  • The classifier has 1024 input features and 1000 output features.
  • The ImageNet dataset has 1000 different classes, so the number of outputs must be 1000 for these classes.
  • The feature layers will not be retrained, so their parameters need to be frozen.
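
A minimal sketch of loading DenseNet-121 and inspecting its classifier (the printed shape matches the torchvision implementation):

    from torchvision import models

    model = models.densenet121(pretrained=True)
    print(model.classifier)
    # Linear(in_features=1024, out_features=1000, bias=True)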

FREEZING THE FEATURES
  • We want the feature part to stay static, so we freeze the feature parameters as shown below.
  • This speeds up training because gradients are no longer computed or tracked for the frozen parameters.
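
A minimal sketch of freezing the pre-trained parameters (any classifier attached afterwards will still have gradients enabled by default):

    # Turn off gradient tracking for the pre-trained weights
    for param in model.parameters():
        param.requires_grad = False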

BUILD CLASSIFIER
We would build our classifier as shown below
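
A minimal sketch of a replacement classifier; the hidden size (256) and the two output classes are assumptions for illustration, while 1024 matches DenseNet-121's feature output:

    from collections import OrderedDict
    import torch.nn as nn

    classifier = nn.Sequential(OrderedDict([
        ('fc1', nn.Linear(1024, 256)),     # 1024 inputs from the frozen features
        ('relu', nn.ReLU()),
        ('fc2', nn.Linear(256, 2)),        # 2 output classes assumed for this example
        ('output', nn.LogSoftmax(dim=1))
    ]))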

ATTACH TO OUR MODEL
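
Attaching the new classifier simply replaces the model's original ImageNet classifier attribute:

    model.classifier = classifier

Because the new layers were created after freezing, their parameters still require gradients, so they are the only part updated during training.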


MOVE TO GPU
  • We can move all the computation to a GPU by calling model.cuda(). This moves all of the model's parameters to the GPU.


  • To move tensors to the GPU, we call image.cuda() as well.
  • To move back to the CPU, we use image.cpu() and model.cpu() (see the sketch below).
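
A minimal sketch of moving the model and a batch of images between devices (the batch here is dummy data for illustration):

    import torch
    from torchvision import models

    model = models.densenet121(pretrained=True)
    images = torch.randn(32, 3, 224, 224)   # dummy input batch

    # Move the model parameters and the tensor to the GPU
    model.cuda()
    images = images.cuda()

    # Move them back to the CPU
    model.cpu()
    images = images.cpu()

Note that .cuda() on a tensor returns a new tensor and has to be reassigned, while calling it on a module moves the parameters in place.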


COMPARISON BETWEEN CPU AND GPU
  • On a GPU we can get a speed-up of over 100x compared to the CPU; a rough way to measure it is sketched below.
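
A rough sketch of how such a comparison can be timed (the exact speed-up depends entirely on the hardware):

    import time
    import torch
    from torchvision import models

    model = models.densenet121(pretrained=True)
    images = torch.randn(32, 3, 224, 224)
    labels = torch.randint(0, 1000, (32,))
    criterion = torch.nn.CrossEntropyLoss()

    for device in ['cpu', 'cuda']:
        if device == 'cuda' and not torch.cuda.is_available():
            continue
        model.to(device)
        x, y = images.to(device), labels.to(device)

        start = time.time()
        output = model(x)            # forward pass
        loss = criterion(output, y)
        loss.backward()              # backward pass
        if device == 'cuda':
            torch.cuda.synchronize()
        print(f"{device}: {time.time() - start:.3f} seconds per batch")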
