Visualize object detection model SSD MobileNet v2

Can anyone please help me visualize the inner features of the SSD model at inference time, such as the feature map of each layer?

By the way, I have no clue how to get access to the last layers of the model, i.e. the class prediction, bounding-box prediction, etc.

You can get access to the inner layers, such as the backbone, in the usual Keras fashion. First, build and restore the object detection model as in the notebook eager_few_shot_od_training_tf2_colab.ipynb; the backbone should then be available as detection_model.feature_extractor.classification_backbone, and with that you can extract features.
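As a sketch of the feature-extraction step, assuming TensorFlow is installed: here a plain `tf.keras.applications.MobileNetV2` stands in for the OD API's `classification_backbone` (the exact attribute path can vary between releases), and the layer names are MobileNetV2's own. The idea is to build a second Keras model whose outputs are the intermediate activations you want to visualize.

```python
import tensorflow as tf

# Stand-in for detection_model.feature_extractor.classification_backbone:
# a plain MobileNetV2 backbone without the classification head.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(320, 320, 3), include_top=False, weights=None)

# Build a model whose outputs are the activations of chosen layers.
layer_names = ["block_3_expand_relu", "block_13_expand_relu", "out_relu"]
feature_model = tf.keras.Model(
    inputs=backbone.input,
    outputs=[backbone.get_layer(n).output for n in layer_names])

# Run an image (here random data) through it to get the feature maps.
images = tf.random.uniform((1, 320, 320, 3))
feature_maps = feature_model(images)
for name, fmap in zip(layer_names, feature_maps):
    print(name, fmap.shape)
```

Each entry of `feature_maps` can then be plotted channel by channel, e.g. with matplotlib's `imshow`.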

Thanks for the help, but I already know how to do feature extraction from backbone networks like VGG16, MobileNet, etc. What I want is to extract features from the object detection layers.

You’re welcome, and I’m also looking at how to extract the object detector layers. If you get an answer, please share it. :smiley:

I was tracing the object detection object back through the API repository. Maybe it was luck, but I found out how to get at the object detection layers, at least the prediction head:
detection_model._box_predictor._prediction_heads['class_predictions_with_background']._class_predictor_layers[0]
Hopefully this helps.

My bad: my last approach gets the layer, but not as a tensor. The layer doesn't respond to layer.outputs[0], because it is not a Keras tensor, so I can't get the feature map from it. So I'm still where I was last time.
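One possible workaround, as a sketch: layers pulled out of a subclassed model have no `layer.output` / `layer.outputs` (those attributes only exist for layers wired into a functional graph), but you can still get a feature map by calling the layer object directly on an input tensor. Here a fresh `Conv2D` stands in for the extracted `_class_predictor_layers[0]`, and a random tensor stands in for the backbone feature map that would feed the prediction head.

```python
import tensorflow as tf

# Stand-in for the layer pulled out of the detection model.
conv = tf.keras.layers.Conv2D(8, 3, padding="same")

# Stand-in for a feature map coming out of the backbone; in the OD
# API this would be the tensor fed into the prediction head.
feature_map = tf.random.uniform((1, 10, 10, 16))

# Calling the layer runs a forward pass through just this layer and
# returns a real tensor, no layer.output attribute needed.
activations = conv(feature_map)
print(activations.shape)  # (1, 10, 10, 8)
```

The same call works on the layer you extracted, provided you feed it a tensor with the channel count it was built for.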

Considering the feature maps in an SSD model:

Which specific feature map do you want to extract here?

The class prediction layers, but I need at least the last prediction layer as a tensor. I found the prediction head, but it lacks an output attribute, so it can't be used to get the model's feature map at that layer.

Once you are able to load the detection checkpoints into their respective model class, I think it should be possible to access the layers just by their indices.
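For a functional Keras model this index-based access looks like the sketch below; a plain MobileNetV2 is used as a stand-in here, since whether the OD API's subclassed models expose a comparable `layers` list is exactly the open question in this thread.

```python
import tensorflow as tf

# Any built Keras model exposes its layers as an ordered list.
model = tf.keras.applications.MobileNetV2(weights=None, include_top=False)

first = model.layers[0]   # the input layer
last = model.layers[-1]   # the last layer in the list

# Layers can also be fetched by name instead of index.
same_as_last = model.get_layer(last.name)
print(first.name, last.name)
```

For a subclassed model, the equivalent starting point is usually `model.submodules` or the named attributes set in its constructor.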

OD API models may often seem a bit opaque. I would also suggest taking a look at the Model Garden object detection models, since they seem to be more transparent and accessible with regard to interpretability:

https://github.com/tensorflow/models/tree/master/official/vision/detection

Thanks. I used to think the same: once a checkpoint was loaded, it would be possible to access the layers by their indices or similar. But that was not the case with the Object Detection API. I will take a look at your suggestion; nevertheless, the current state and documentation of the API are frustrating.
