Model Development
We will use pre-trained weights for a ResNet50 network trained on the ImageNet dataset, which contains over a million images spanning around 1,000 classes. The parameters of the model we import have already been trained on millions of images over many hours, so we do not need to train them again.
Python3
from tensorflow.keras.applications.resnet50 import ResNet50

# Load the ResNet50 base with ImageNet weights, without the classification head
pre_trained_model = ResNet50(
    input_shape=(224, 224, 3),
    weights='imagenet',
    include_top=False
)

# Freeze the pre-trained layers so their weights are not updated during training
for layer in pre_trained_model.layers:
    layer.trainable = False
Output:
94765736/94765736 [==============================] - 5s 0us/step
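To illustrate what freezing does, here is a minimal sketch using a tiny stand-in model (not ResNet50, to avoid the ImageNet weight download); the freezing loop is identical to the one above:

```python
from tensorflow.keras import layers, Sequential

# A small toy model standing in for the pre-trained base
toy_base = Sequential([
    layers.Input(shape=(8,)),
    layers.Dense(4, name='hidden'),
    layers.Dense(2, name='out'),
])

# Freeze every layer, exactly as done for the pre-trained network
for layer in toy_base.layers:
    layer.trainable = False

# No weights remain trainable, so fit() would leave them untouched
print(len(toy_base.trainable_weights))  # 0
```

After freezing, only layers added on top of the base (in the next section) will learn during training.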
Model Architecture
We will implement a model using the Functional API of Keras which will contain the following parts:
- The base model is the ResNet50 model in this case.
- A Flatten layer flattens the output of the base model.
- The flattened output is then passed through two fully connected layers.
- We have included some BatchNormalization layers to enable stable and fast training and a Dropout layer before the final layer to avoid any possibility of overfitting.
- The final layer is the output layer, which outputs soft probabilities for the five classes.
Python3
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(224, 224, 3))

# Pass the input through the frozen pre-trained base, then flatten
x = pre_trained_model(inputs)
x = layers.Flatten()(x)

# Two fully connected layers with BatchNormalization and Dropout
x = layers.Dense(256, activation='relu')(x)
x = layers.BatchNormalization()(x)
x = layers.Dense(256, activation='relu')(x)
x = layers.Dropout(0.3)(x)
x = layers.BatchNormalization()(x)

# Output layer with soft probabilities for the five classes
outputs = layers.Dense(5, activation='softmax')(x)

model = Model(inputs, outputs)
While compiling the model we provide three essential arguments: a loss function, an optimizer, and a metric to track progress.
Python3
import tensorflow as tf

# The final layer already applies softmax, so the loss
# receives probabilities rather than logits
model.compile(
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),
    optimizer='adam',
    metrics=['AUC']
)
Now we are ready to train our model.
Python3
history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=5,
                    verbose=1)
Output:
Epoch 1/5
115/115 [==============================] - 8s 60ms/step - loss: 1.5825 - auc: 0.7000 - val_loss: 1.6672 - val_auc: 0.7152
Epoch 2/5
115/115 [==============================] - 7s 59ms/step - loss: 1.3806 - auc: 0.7650 - val_loss: 1.4497 - val_auc: 0.7531
Epoch 3/5
115/115 [==============================] - 8s 68ms/step - loss: 1.2619 - auc: 0.7980 - val_loss: 1.3494 - val_auc: 0.7751
Epoch 4/5
115/115 [==============================] - 7s 58ms/step - loss: 1.1828 - auc: 0.8242 - val_loss: 1.3371 - val_auc: 0.7751
Epoch 5/5
115/115 [==============================] - 7s 60ms/step - loss: 1.0954 - auc: 0.8485 - val_loss: 1.8526 - val_auc: 0.7215
In the code below, we create a DataFrame from the log obtained while training the model.
Python3
import pandas as pd

hist_df = pd.DataFrame(history.history)
hist_df.head()
Output:
Let’s visualize the training loss and the validation loss of the data.
Python3
import matplotlib.pyplot as plt

hist_df['loss'].plot()
hist_df['val_loss'].plot()
plt.title('Loss v/s Validation Loss')
plt.legend()
plt.show()
Output:
Let’s visualize the training AUC and the validation AUC of the data.
Python3
hist_df['auc'].plot()
hist_df['val_auc'].plot()
plt.title('AUC v/s Validation AUC')
plt.legend()
plt.show()
Output:
How can Tensorflow be used with the flower dataset to compile and fit the model?
In this article, we learned how to compile a model and fit the flower dataset to it. To fit a dataset to a model, we first create a data pipeline, build the model's architecture using TensorFlow's high-level API, and then, before fitting the model on the data pipelines, compile the model with an appropriate loss function, an optimizer, and a metric to track whether the model is making progress epoch after epoch.
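The workflow described above can be sketched end to end with random placeholder data in place of the flower images (the shapes, batch size, and class count below are illustrative assumptions, and a small pooling head stands in for the full transfer-learning model):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# 1. Data pipeline: a tf.data dataset of (image, one-hot label) batches
images = np.random.rand(32, 224, 224, 3).astype('float32')
labels = tf.keras.utils.to_categorical(np.random.randint(0, 5, 32), 5)
train_ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(8)

# 2. Model architecture via the Functional API (a small stand-in head)
inputs = layers.Input(shape=(224, 224, 3))
x = layers.GlobalAveragePooling2D()(inputs)
x = layers.Dense(16, activation='relu')(x)
outputs = layers.Dense(5, activation='softmax')(x)
model = Model(inputs, outputs)

# 3. Compile with a loss, an optimizer and a metric, then fit
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['AUC'])
history = model.fit(train_ds, epochs=1, verbose=0)

# The returned History object holds the per-epoch training log
print(sorted(history.history.keys()))
```

The same three steps apply unchanged when `train_ds` is built from the real flower images and the model includes the pre-trained base.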