Transfer Learning in Python - with Practical Examples
Transfer learning is a technique that allows us to leverage pre-trained models to solve new tasks with less data and fewer computational resources. It is especially useful when working with limited datasets or when training deep learning models from scratch is not feasible.
**Example 1: Fine-tuning a Pre-trained Image Classification Model**
Step 1: Load a pre-trained model (e.g., ResNet) and freeze its layers to prevent them from being updated during training.
Step 2: Replace the final classification layer with a new one that matches the number of classes in your new dataset.
Step 3: Train the new head on your dataset; optionally, unfreeze the top layers of the base model afterwards and fine-tune them at a low learning rate (a sketch of this appears after the code below).
```python
import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

num_classes = 10  # set this to the number of classes in your dataset

# Step 1: load ResNet50 pre-trained on ImageNet, without its classification
# head, and freeze its layers so they are not updated during training.
base_model = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base_model.layers:
    layer.trainable = False

# Step 2: add a new classification head that matches the new dataset.
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(num_classes, activation='softmax')(x)

model = Model(inputs=base_model.input, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Step 3: train the new head (train_data, train_labels, val_data and
# val_labels are placeholders for your own arrays or generators).
model.fit(train_data, train_labels, epochs=10, validation_data=(val_data, val_labels))
```
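The optional fine-tuning pass from Step 3 is not shown above. A minimal sketch of it might look like the following, reusing `model` and `base_model` from the previous block; the number of layers to unfreeze (20) and the low learning rate (1e-5) are illustrative assumptions, not fixed values.

```python
# Unfreeze the top layers of the base model for fine-tuning. The cutoff (-20)
# and the learning rate (1e-5) are illustrative choices, not prescribed values.
for layer in base_model.layers[-20:]:
    layer.trainable = True

# Recompile so the new trainable settings take effect, using a low learning
# rate to avoid destroying the pre-trained weights.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(train_data, train_labels, epochs=5,
          validation_data=(val_data, val_labels))
```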
**Example 2: Feature Extraction using a Pre-trained Model**
Step 1: Load a pre-trained model (e.g., VGG16) without the top classification layers.
Step 2: Extract features from the pre-trained model for your dataset.
Step 3: Train a new classifier (e.g., SVM) on the extracted features.
```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Model
from sklearn.svm import SVC

# Step 1: load VGG16 pre-trained on ImageNet, without its classification
# head, and use its last pooling layer as a fixed feature extractor.
base_model = VGG16(weights='imagenet', include_top=False)
feature_extractor = Model(inputs=base_model.input,
                          outputs=base_model.get_layer('block5_pool').output)

# Step 2: extract a feature vector for every image (X_train is a placeholder
# for a list of image file paths, y_train for the matching labels).
X_train_features = []
for img_path in X_train:
    img = image.load_img(img_path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    features = feature_extractor.predict(x)
    X_train_features.append(features.flatten())  # flatten to a 1-D vector for the SVM
X_train_features = np.array(X_train_features)

# Step 3: train an SVM classifier on the extracted features.
svm = SVC()
svm.fit(X_train_features, y_train)
```
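To classify a new image with this pipeline, it must pass through the same preprocessing and feature extractor before reaching the SVM. A minimal sketch, where the file name `new_image.jpg` is a placeholder:

```python
# Classify a new image with the same preprocessing and feature extractor.
img = image.load_img('new_image.jpg', target_size=(224, 224))  # placeholder path
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
prediction = svm.predict(feature_extractor.predict(x).reshape(1, -1))
print(prediction)
```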
By following these steps, you can effectively use transfer learning to improve the performance of your models even with limited data, reusing the general features a pre-trained network has already learned instead of training everything from scratch.