Model Architecture
We will implement a Sequential model which will contain the following parts:
- Two fully connected (Dense) hidden layers with 256 units each, followed by a single-unit output layer.
- BatchNormalization layers to enable stable and fast training, and a Dropout layer before the final layer to reduce overfitting.
Python3
model = keras.Sequential([
    layers.Dense(256, activation='relu', input_shape=[8]),
    layers.BatchNormalization(),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.3),
    layers.BatchNormalization(),
    layers.Dense(1, activation='relu')
])

model.compile(
    loss='mae',
    optimizer='adam',
    metrics=['mape']
)
While compiling the model we provide three essential parameters:
- loss: 'mae' (mean absolute error), the objective minimized during training.
- optimizer: 'adam', which adapts the learning rate for each parameter during training.
- metrics: ['mape'] (mean absolute percentage error), tracked for monitoring but not optimized directly.

We can inspect the resulting architecture with model.summary().
Python3
model.summary()
Output:
Model: "sequential"
_________________________________________________________________
 Layer (type)                                Output Shape   Param #
=================================================================
 dense (Dense)                               (None, 256)    2304
 batch_normalization (BatchNormalization)    (None, 256)    1024
 dense_1 (Dense)                             (None, 256)    65792
 dropout (Dropout)                           (None, 256)    0
 batch_normalization_1 (BatchNormalization)  (None, 256)    1024
 dense_2 (Dense)                             (None, 1)      257
=================================================================
Total params: 70,401
Trainable params: 69,377
Non-trainable params: 1,024
_________________________________________________________________
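As a sanity check, the parameter counts in the summary follow directly from the layer shapes. A Dense layer has inputs × units weights plus units biases, and a BatchNormalization layer has 4 × units parameters (gamma, beta, moving mean, moving variance; the last two are non-trainable). This small sketch reproduces the totals by hand:

```python
# Sanity-check the parameter counts reported by model.summary().
dense   = 8 * 256 + 256      # 2304: 8 input features -> 256 units
bn      = 4 * 256            # 1024 per BatchNormalization layer
dense_1 = 256 * 256 + 256    # 65792
dropout = 0                  # Dropout has no parameters
dense_2 = 256 * 1 + 1        # 257

total = dense + bn + dense_1 + dropout + bn + dense_2
non_trainable = 2 * (2 * 256)  # moving mean/variance of both BN layers

print(total)          # 70401
print(non_trainable)  # 1024
```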
Now we will train our model.
Python3
history = model.fit(X_train, Y_train,
                    epochs=50,
                    verbose=1,
                    batch_size=64,
                    validation_data=(X_val, Y_val))
Output:
Epoch 46/50
53/53 [==============================] - 0s 7ms/step - loss: 1.5060 - mape: 14.9777 - val_loss: 1.5403 - val_mape: 14.0747
Epoch 47/50
53/53 [==============================] - 0s 7ms/step - loss: 1.4989 - mape: 14.6385 - val_loss: 1.5414 - val_mape: 14.2294
Epoch 48/50
53/53 [==============================] - 0s 6ms/step - loss: 1.4995 - mape: 14.8053 - val_loss: 1.4832 - val_mape: 14.1244
Epoch 49/50
53/53 [==============================] - 0s 6ms/step - loss: 1.4951 - mape: 14.5988 - val_loss: 1.4735 - val_mape: 14.2099
Epoch 50/50
53/53 [==============================] - 0s 7ms/step - loss: 1.5013 - mape: 14.7809 - val_loss: 1.5196 - val_mape: 15.0205
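The "53/53" in the log is the number of gradient updates per epoch: Keras runs ceil(n_samples / batch_size) steps. The exact training-set size is not shown in this section, so the value below is an assumption chosen to be consistent with the log:

```python
import math

batch_size = 64
n_train = 3380  # hypothetical; any size in (52*64, 53*64] gives 53 steps

steps_per_epoch = math.ceil(n_train / batch_size)
print(steps_per_epoch)  # 53
```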
Let’s visualize the training and validation MAE and MAPE at each epoch.
Python3
hist_df = pd.DataFrame(history.history)
hist_df.head()
Output:
Python3
hist_df['loss'].plot()
hist_df['val_loss'].plot()
plt.title('Loss v/s Validation Loss')
plt.legend()
plt.show()
Output:
Python3
hist_df['mape'].plot()
hist_df['val_mape'].plot()
plt.title('MAPE v/s Validation MAPE')
plt.legend()
plt.show()
Output:
From the two graphs above, we can see that both error values (MAE, which is the loss, and MAPE) decrease steadily and in tandem, and that the curves level off after about 15 epochs.
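Rather than eyeballing the plateau, we can read the best epoch straight from the history DataFrame. A minimal sketch with a synthetic stand-in for history.history (the real dict from model.fit has the same keys):

```python
import pandas as pd

# Synthetic stand-in for history.history; real values come from model.fit.
hist = {
    'val_loss': [3.1, 2.4, 1.9, 1.62, 1.55, 1.52, 1.50, 1.51, 1.50, 1.52],
}
hist_df = pd.DataFrame(hist)

# idxmin returns the first row with the lowest val_loss;
# add 1 because the Keras log numbers epochs from 1.
best_epoch = hist_df['val_loss'].idxmin() + 1
print(best_epoch)  # 7
```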