Issues in deciphering TensorFlow Lite Interpreter output

I have been working on an image classification problem where the objective is to train a predefined neural network model on a set of TFRecords and then run inference. All of this works with reasonable accuracy in Colab.

Subsequent to this, I converted the saved_model.pb into a model.tflite file. I have checked it with the Netron app, and it is seemingly taking the correct input (an image tensor).

After this I called interpreter.invoke().

Following this, when I try to decipher the output tensor, I should be able to at least render the output image, but I am having difficulty doing this.

This is the link to the Google Colab notebook where I have maintained the code.

I have other Colab notebooks where similar code was run with training for up to 7500 iterations, but I am stuck at the interpreter level in all cases, since I have to port this app onto the Android platform.

Hi @Vishal_Virmani,
Welcome to the community.

I see at the end of your Colab notebook that you try to print the output of the model, which is an array of shape [1, 10, 4]. Why are you doing that if this is a classification problem, as you mentioned? Or is this an object detection problem?

From the previous cells I see that with the model you do:

input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

and then you do:

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
    image_np_with_detections,
    detections['detection_boxes'][0].numpy(),
    (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
    detections['detection_scores'][0].numpy(),
    category_index,
    use_normalized_coordinates=True,
    max_boxes_to_draw=200,
    min_score_thresh=.5,
    agnostic_mode=False,
)

plt.figure(figsize=(12, 16))
plt.imshow(image_np_with_detections)
plt.show()

So you take the model output, post-process it, and then visualize the boxes on the image.

You can think of the output of the Interpreter as the output of the SavedModel, so you have to follow the same procedure there as well: preprocess the image, make the predictions with the Interpreter, post-process them, and draw the boxes on the image.
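In rough terms the flow looks like the sketch below. This is a minimal sketch, not your exact code: the model path, the test image, the [-1, 1] normalization and the output ordering are assumptions for an SSD-style model exported with export_tflite_graph_tf2.py, so verify them against interpreter.get_output_details().

import numpy as np
import tensorflow as tf
from PIL import Image

# Load the converted model; the path is an assumption, adjust to yours.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Preprocess: resize the image to the model's input size and add a batch dim.
height, width = input_details[0]['shape'][1:3]
image = Image.open('test.jpg').convert('RGB').resize((width, height))
input_data = np.expand_dims(np.asarray(image, dtype=np.float32), 0)
input_data = (input_data - 127.5) / 127.5  # typical SSD normalization; verify for your export

interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

# Postprocess: the detection tensors are numbers, not an image, so they
# cannot be rendered with plt.imshow directly. The output order can differ
# between exports, so check output_details to map the indices correctly.
boxes = interpreter.get_tensor(output_details[0]['index'])[0]    # e.g. [10, 4]
classes = interpreter.get_tensor(output_details[1]['index'])[0]  # e.g. [10]
scores = interpreter.get_tensor(output_details[2]['index'])[0]   # e.g. [10]
for box, cls, score in zip(boxes, classes, scores):
    if score > 0.5:
        print('class', int(cls), 'score', float(score), 'box', box)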
If this is indeed an object detection project, check a good example here:

Hi @George_Soloupis The objective of the above Colab notebook is to do face recognition by retraining existing models (in this case I took ssd_mobilenet_v2_320x320_coco17_tpu-8, after testing models like SSD MobileNet v2 320x320 and EfficientDet D0 512x512).

I have created a notebook where I am doing correct inference with the images in Colab.

Now my effort was to create a model.tflite file, which initially I was unable to do. After further research I found that the mechanism for creating the .tflite file is to run:

!python /content/gdrive/MyDrive/TFLite_Check/models/research/object_detection/export_tflite_graph_tf2.py \
  --pipeline_config_path {pipeline_file} \
  --trained_checkpoint_dir {last_model_path} \
  --output_directory {outd}

followed by:
converter = tf.lite.TFLiteConverter.from_saved_model('/content/gdrive/MyDrive/TFLite_Check/freezetflite/saved_model/')

converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter.allow_custom_ops = True

tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:  # output filename assumed
    f.write(tflite_model)
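As a quick sanity check (assuming the converted file was written out as model.tflite), loading it back should confirm the image-tensor input that Netron shows:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model.tflite')  # path assumed
interpreter.allocate_tensors()
# Should print something like shape [1, 320, 320, 3] and dtype float32.
print(interpreter.get_input_details()[0]['shape'])
print(interpreter.get_input_details()[0]['dtype'])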

I implemented these commands in the Colab notebook I shared earlier (they appear in that notebook), not in the one where I am doing the inferences.

I was trying to check whether the output of the interpreter call returns an image, as I had seen in the Selfie2Anime Colab notebook maintained by @Sayak_Paul.

This is why I was calling plt.imshow(output()[0]).

So how do I go ahead now…

I am trying to understand the notebook from the link you have shared.

I am really lost here. I have not understood the procedure. If you can upload the correct Colab notebook for me to take a look, that would be fine! The previous one just shows an object detection procedure.

Yeah, I am doing it.

In here I have correctly inferred 2 people; the model was trained only for these 2 people.
In this notebook I failed to create the tflite model, which I did later in the earlier notebook.

@George_Soloupis I keep getting a prompt that the link cannot be shared in the post. So I have shared it via the email notification which I received in my Gmail account.

@George_Soloupis In case you don't receive it, please let me know.

https://drive.google.com/file/d/1JyO0aPIL_gStEuh8fL2jTrqXlO1wy1RA/view?usp=sharing

@George_Soloupis I have just shared the link for the notebook.


@Vishal_Virmani I took a look at the notebook. It seems that it is the same object detection procedure, with bounding boxes drawn over the image.

Convert the model with the same procedure as in the previous notebook. Then, if you want to check the output of the interpreter, use this:

and if you want to port it to Android, the example here:

is what you need!
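In the meantime, here is a rough sketch of inspecting what the Interpreter actually returns; the model path is an assumption, and the exact names and shapes depend on your export. The [1, 10, 4] array you printed is almost certainly the box coordinates (10 detections, each [ymin, xmin, ymax, xmax]), which is why plt.imshow cannot render it as an image:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model.tflite')  # path assumed
interpreter.allocate_tensors()
for detail in interpreter.get_output_details():
    # Typical SSD exports have four outputs: boxes, classes, scores, count.
    print(detail['name'], detail['shape'], detail['dtype'])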

@George_Soloupis I am going through the same.
Initially I thought that just by calling interpreter.invoke() and catching the output tensor, I could display the image back.

I will try to understand it and see if I can implement it with the links you have shared.
