Model Prediction with Metrics
Python3

```python
def get_test_image_and_annotation_arrays():
    '''Unpacks the test dataset and returns the input images and segmentation masks'''
    ds = test_ds.unbatch()
    ds = ds.batch(info.splits['test'].num_examples)
    images = []
    y_true_segments = []

    for image, annotation in ds.take(1):
        y_true_segments = annotation.numpy()
        images = image.numpy()

    # Trim to a multiple of BATCH_SIZE so predictions align with the arrays
    y_true_segments = y_true_segments[:(info.splits['test'].num_examples
                                        - (info.splits['test'].num_examples % BATCH_SIZE))]
    images = images[:(info.splits['test'].num_examples
                      - (info.splits['test'].num_examples % BATCH_SIZE))]

    return images, y_true_segments


y_true_images, y_true_segments = get_test_image_and_annotation_arrays()

# Pick one test image, run it through the model, and build its predicted mask
integer_slider = 2574
img = np.reshape(y_true_images[integer_slider], (1, width, height, 3))
y_pred_mask = model.predict(img)
y_pred_mask = create_mask(y_pred_mask)
y_pred_mask.shape


def display_prediction(display_list, display_string):
    plt.figure(figsize=(15, 15))
    title = ['Input Image', 'True Mask', 'Predicted Mask']

    for i in range(len(display_list)):
        plt.subplot(1, len(display_list), i + 1)
        plt.title(title[i])
        plt.xticks([])
        plt.yticks([])
        if i == 1:
            plt.xlabel(display_string, fontsize=12)
        plt.imshow(keras.preprocessing.image.array_to_img(display_list[i]))
    plt.show()


iou, dice_score = compute_metrics(y_true_segments[integer_slider],
                                  y_pred_mask.numpy())
display_list = [y_true_images[integer_slider],
                y_true_segments[integer_slider],
                y_pred_mask]
display_string_list = ["{}: IOU: {} Dice Score: {}".format(class_names[idx], i, dc)
                       for idx, (i, dc) in enumerate(zip(np.round(iou, 4),
                                                         np.round(dice_score, 4)))]
display_string = "\n\n".join(display_string_list)

# showing predictions with metrics
display_prediction(display_list, display_string)
```
Output:
Hence, we have performed image segmentation using TensorFlow on the Oxford-IIIT Pet dataset.
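The compute_metrics helper used in the prediction code is defined earlier in the tutorial. As a rough, NumPy-only sketch of what it computes (per-class IoU and Dice over integer masks, with a small smoothing term to avoid division by zero; the class count of 3 matches the pet/border/background trimap, everything else here is an illustrative assumption):

```python
import numpy as np

def compute_metrics(y_true, y_pred, n_classes=3, smooth=1e-6):
    # Flatten the masks so each pixel becomes one entry
    y_true = np.asarray(y_true).flatten()
    y_pred = np.asarray(y_pred).flatten()

    iou, dice = [], []
    for c in range(n_classes):
        true_c = (y_true == c)
        pred_c = (y_pred == c)
        intersection = np.logical_and(true_c, pred_c).sum()
        union = np.logical_or(true_c, pred_c).sum()
        # IoU = |A ∩ B| / |A ∪ B|
        iou.append((intersection + smooth) / (union + smooth))
        # Dice = 2 * |A ∩ B| / (|A| + |B|)
        dice.append((2 * intersection + smooth) /
                    (true_c.sum() + pred_c.sum() + smooth))
    return np.array(iou), np.array(dice)
```

On a mask that the model predicts perfectly, both arrays come out as all ones; a class the model misses entirely drops toward zero in both metrics.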
Image Segmentation Using TensorFlow
Image segmentation refers to the task of assigning a class label to each group of pixels in an image. While the input is an image, the output is a mask that outlines the region of each shape in that image. Image segmentation has wide applications in domains such as medical image analysis, self-driving cars, and satellite image analysis. There are different types of image segmentation techniques, such as semantic segmentation and instance segmentation. To summarize, the key goal of image segmentation is to recognize and understand what’s in an image at the pixel level.
For the image segmentation task, we will use “The Oxford-IIIT Pet Dataset”, which is free to use. It contains 37 pet categories with roughly 200 images per class. The images have large variations in scale, pose, and lighting. All images have an associated ground-truth annotation of breed, head ROI, and pixel-level trimap segmentation. Each pixel is classified into one of three categories:
- Pixel belonging to the pet
- Pixel bordering the pet
- Pixel belonging neither to class 1 nor to class 2
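In the TFDS copy of the dataset, the trimap masks store these three categories as pixel values 1, 2, and 3, so a common preprocessing step is simply to shift them to 0-based class indices. A minimal sketch with a toy 3×3 mask (NumPy stands in here for the TensorFlow ops used in the full pipeline):

```python
import numpy as np

# Toy 3x3 trimap as stored in the dataset:
# 1 = pet, 2 = border, 3 = neither (background)
trimap = np.array([[3, 2, 3],
                   [2, 1, 2],
                   [3, 2, 3]])

def normalize_mask(mask):
    # Shift the {1, 2, 3} trimap values to 0-based class indices {0, 1, 2}
    return mask - 1

mask = normalize_mask(trimap)
```

After this shift, the mask values line up directly with the indices of a 3-class softmax output, which is what the model trained later in the tutorial predicts per pixel.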