Weighted Gradients
Let’s consider a neural network where we want to apply a different learning rate to each layer. Using multiple tapes, we can achieve this efficiently.
The following code snippet computes the gradients for different parts of the model independently:
- A simple neural network is defined, and random input data is generated for demonstration.
- Multiple tapes for each layer: two GradientTape instances, tape1 and tape2, are created with persistent=True so that gradients can be computed more than once without the tape's resources being released after a single call to gradient() (see the short sketch after this list). Within the nested with blocks, we perform a forward pass through the model to generate predictions and compute a scalar loss.
- Compute gradients for each layer: we compute the gradients for each layer of the model separately using the respective tapes (tape1 and tape2). gradients_layer1 contains the gradients of the loss with respect to the trainable variables of the first layer (model.layers[0]), and gradients_layer2 contains the gradients of the loss with respect to the trainable variables of the second layer (model.layers[1]).
- Different learning rates are applied to each layer, and each layer's weights are updated using the gradients computed from its respective tape and its assigned learning rate. This is done by subtracting the product of the learning rate and the gradient from the current weights of each layer. The updated weights of each layer are then printed.
Python3
import tensorflow as tf

# Define a sample neural network
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10)
])

# Dummy input data
inputs = tf.random.normal((1, 10))

# Create tapes for each layer
with tf.GradientTape(persistent=True) as tape1:
    with tf.GradientTape(persistent=True) as tape2:
        # Forward pass
        predictions = model(inputs)
        # Dummy scalar loss for demonstration
        loss = tf.reduce_mean(tf.square(predictions))

# Compute gradients for each layer from its respective tape
gradients_layer1 = tape1.gradient(loss, model.layers[0].trainable_variables)
gradients_layer2 = tape2.gradient(loss, model.layers[1].trainable_variables)

# Apply different learning rates to each layer
learning_rate_layer1 = 0.01
learning_rate_layer2 = 0.001

# Update the weights (kernels) of each layer
model.layers[0].kernel.assign_sub(learning_rate_layer1 * gradients_layer1[0])
model.layers[1].kernel.assign_sub(learning_rate_layer2 * gradients_layer2[0])

# Display updated weights
print("Updated Weights - Layer 1:")
print(model.layers[0].get_weights()[0])
print("\nUpdated Weights - Layer 2:")
print(model.layers[1].get_weights()[0])
Output:
Updated Weights - Layer 1:
[[-1.30826473e-01 -2.32910410e-01 1.53757617e-01 -2.33601332e-01
2.37545267e-01 1.29789859e-01 -1.12673879e-01 -4.85953987e-02
2.53589600e-01 1.18229769e-01 3.76837850e-02 1.36155441e-01
4.61646914e-02 -1.23881459e-01 7.15705100e-04 1.30734965e-01
2.74057567e-01 -3.36100459e-02 1.17648832e-01 2.65050530e-02
........
Updated Weights - Layer 2:
[[ 0.12994759  0.09406354 -0.02325075  0.04526017 -0.04975254  0.2231702
   0.21599863  0.13290443 -0.1242546  -0.17571561]
 [-0.10918297  0.2301283   0.02327682 -0.07420231  0.0579354   0.04462339
   0.02882947 -0.19031678 -0.2628794   0.24104424]
 [ 0.04480169 -0.25517935 -0.21863683  0.1296206   0.20039697  0.23810901
   0.28418207 -0.00311767 -0.2530919   0.01515845]
 [ 0.23954001 -0.08794038  0.06706679 -0.05967966  0.03434923  0.20604822
  -0.18618475  0.1561557   0.07995269  0.266633  ]
........
The output contains the updated weights of layer 1 and layer 2.
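As an aside, the same per-layer learning rates can be applied with standard Keras optimizers instead of manual assign_sub updates. The sketch below is an assumed alternative, not part of the original example; it reuses gradients_layer1 and gradients_layer2 from the code above and also updates the biases, which the manual version skips.
Python3
import tensorflow as tf

# One optimizer per layer, each with its own learning rate
optimizer_layer1 = tf.keras.optimizers.SGD(learning_rate=0.01)
optimizer_layer2 = tf.keras.optimizers.SGD(learning_rate=0.001)

# Pair each layer's gradients with its variables and let the
# optimizers perform the update (kernel and bias together).
optimizer_layer1.apply_gradients(
    zip(gradients_layer1, model.layers[0].trainable_variables))
optimizer_layer2.apply_gradients(
    zip(gradients_layer2, model.layers[1].trainable_variables))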