Transfer Learning for Image Classification — (7) Fine-tune the Transfer Learning Model

Chris Kuo/Dr. Dataman
4 min read · Aug 7, 2022

~ “Practice makes perfect.” ~

In the previous chapter, "Chapter 6: Build your transfer learning model", we built a model for our special use case. How can we improve its performance? We can either add more data, work on the model itself, or both. More and better-annotated images are one good route. Alternatively, we can fine-tune the model: pre-trained models let us open up more layers for training, and a more flexible model can improve predictive performance. In this chapter, we will fine-tune the model.
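To make "opening up more layers" concrete, here is a minimal sketch, assuming the Keras implementation of VGG-16 (the `block5_*` layer names are Keras's own; `weights=None` skips the ImageNet download but produces the same architecture). It unfreezes only the last convolutional block while keeping the earlier, more generic blocks frozen:

```python
from tensorflow.keras.applications import VGG16

# Convolutional base only; weights=None avoids the ImageNet download
# but the architecture and layer names match the pre-trained model.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

# Freeze everything except the last convolutional block (block5),
# so only the most task-specific features get re-trained.
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")

print([l.name for l in base.layers if l.trainable])
# ['block5_conv1', 'block5_conv2', 'block5_conv3', 'block5_pool']
```

With real fine-tuning you would pass `weights="imagenet"` and re-compile the model after changing `trainable` flags, since Keras only picks up the change at compile time.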

(A) Open Up More Trainable Layers

Let me print the layers of VGG-16. All of its 138,357,544 parameters are trainable. You can find the Python notebook via this link.

Figure (A): VGG-16 (image by author)
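The layer listing in Figure (A) can be reproduced with a short snippet (a sketch assuming Keras's `VGG16`; `weights=None` skips the ImageNet weight download but yields the identical layer and parameter layout):

```python
from tensorflow.keras.applications import VGG16

# Full VGG-16, including the classification head.
model = VGG16(weights=None)

# Prints every layer with its output shape and parameter count.
model.summary()

print(model.count_params())  # 138357544 -- all trainable by default
```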
