Salad Detector EfficientDet-Lite2 all classes coming back as 0.0

I've been working on training the EfficientDet-Lite2 model to recognize my own object. Almost everything works: the new model outlines my object just fine, but the classes always come back as zero. That means I can't tell the difference between my object and something else the model recognizes (such as a person).

Here's the output of the classes, scores, boxes, and count when I run inference on Colab. In this case I have a picture of a person and a picture of my object, and it outlines both of them cleanly, but nothing meaningful is returned in the classes. What am I missing?

edit: I tried again with the latest release of the salad detector example and got the same results. If I just run the example as-is, I get classes out along with boxes and scores. But after I train it with my data instead, I get boxes for my object and other objects, but no classes.
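For reference, here's roughly the code that produced the dump below (a minimal sketch, not my exact notebook: the model path, test image, preprocessing, and the guess that the stray 0.3 is my score threshold are all assumptions):

```python
import numpy as np
import tensorflow as tf

SCORE_THRESHOLD = 0.3  # assumption: this is the lone 0.3 printed at the top of the dump
print(SCORE_THRESHOLD)

# Load the retrained EfficientDet-Lite2 export ('model.tflite' is a placeholder path).
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Resize the test image to the model's input size; Model Maker exports take uint8 input.
_, height, width, _ = input_details[0]['shape']
img = tf.io.decode_image(tf.io.read_file('test.jpg'), channels=3)  # placeholder image
img = tf.cast(tf.image.resize(img, (height, width)), tf.uint8)[tf.newaxis, ...]

interpreter.set_tensor(input_details[0]['index'], img.numpy())
interpreter.invoke()

# Print each output tensor the same way as the dump below; the
# StatefulPartitionedCall:N names come straight from get_output_details().
tags = {'StatefulPartitionedCall:2': 'CLASSES',
        'StatefulPartitionedCall:1': 'SCORES',
        'StatefulPartitionedCall:3': 'BOXES',
        'StatefulPartitionedCall:0': 'COUNT'}
for detail in output_details:
    tag = tags.get(detail['name'], detail['name'])
    print(tag)
    print(detail)
    tensor = interpreter.get_tensor(detail['index'])
    print(type(tensor))
    print(np.squeeze(tensor))
    print('END_' + tag)
```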

0.3
CLASSES
{'name': 'StatefulPartitionedCall:2', 'index': 783, 'shape': array([ 1, 25], dtype=int32), 'shape_signature': array([ 1, 25], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}
<class 'numpy.ndarray'>
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0.]
END_CLASSES
SCORES
{'name': 'StatefulPartitionedCall:1', 'index': 784, 'shape': array([ 1, 25], dtype=int32), 'shape_signature': array([ 1, 25], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}
<class 'numpy.ndarray'>
[0.95703125 0.8046875 0.171875 0.13671875 0.1328125 0.109375
0.10546875 0.0703125 0.0703125 0.06640625 0.0625 0.05078125
0.05078125 0.046875 0.04296875 0.04296875 0.04296875 0.04296875
0.0390625 0.0390625 0.0390625 0.0390625 0.0390625 0.03515625
0.03515625]
END_SCORES
BOXES
{'name': 'StatefulPartitionedCall:3', 'index': 782, 'shape': array([ 1, 25, 4], dtype=int32), 'shape_signature': array([ 1, 25, 4], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}
<class 'numpy.ndarray'>
[[ 0.10341597 0.07837614 0.29824328 0.2694588 ]
[ 0.0022909 0.35881817 1.0972939 0.92067206]
[ 0.36508366 0.52892387 1.1009921 0.9214685 ]
[ 0.1687806 0.46745497 0.89054465 0.8574684 ]
[ 0.4743509 0.43261927 1.0083578 0.84755 ]
[-0.10687697 0.48249638 0.8727542 1.1378279 ]
[ 0.42514202 0.2712517 1.2569318 0.93022966]
[ 0.01708302 0.06039434 0.26142305 0.28192842]
[-0.26614624 0.2336869 0.8273681 0.9557977 ]
[ 0.09977013 0.38245487 0.5424307 0.8367195 ]
[ 0.10326806 0.13250032 1.0232235 0.78903913]
[ 0.8101959 0.61326325 0.85606337 0.6559801 ]
[ 0.85945106 0.5897225 0.9021679 0.63271666]
[ 0.19703382 0.6065323 0.8779749 0.9492126 ]
[ 0.70410407 0.46735543 0.7527213 0.51062864]
[ 0.8126633 0.5612771 0.85428905 0.6029028 ]
[ 0.40730965 0.27877745 0.9373851 0.5131608 ]
[ 0.20224822 0.5947402 1.1581546 1.156594 ]
[ 0.77688324 0.5397553 0.82071996 0.58247226]
[ 0.8606483 0.64405364 0.9065157 0.6876077 ]
[ 0.8977856 0.6129542 0.9421932 0.6535166 ]
[ 0.10990259 0.10467488 0.28687546 0.20217824]
[ 0.15204903 0.15080991 0.3211866 0.25086924]
[ 0.75420994 0.488369 0.8028272 0.5336468 ]
[ 0.82664245 0.7167883 0.87047917 0.7611959 ]]
END_BOXES
COUNT
{'name': 'StatefulPartitionedCall:0', 'index': 785, 'shape': array([1], dtype=int32), 'shape_signature': array([1], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}
<class 'numpy.ndarray'>
25.0
END COUNT

Hi @Eric_Steimle,

There are a few reasons this might happen:

  1. Insufficient training data: The model might not have seen enough examples of your specific object to learn its features and distinguish it from other classes.
  2. Missing class mapping: You haven't provided the mapping between the numerical class indices and their corresponding labels. This information is crucial for interpreting the predicted classes; see the sketch after this list.
  3. Incorrect class configuration: The model might be configured to expect a different number of classes than your dataset actually contains, which can lead to inconsistencies and missing classes.
  4. Labeling error: There might be a mismatch between the labels assigned to your training data and the actual classes the model should recognize.
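For point 2, here is a minimal sketch of how the numeric class indices are typically mapped back to labels at inference time. The `labels` list and threshold are placeholders you would replace with your own label map and cutoff; `classes`, `scores`, `boxes`, and `count` are the squeezed output tensors from your dump:

```python
import numpy as np

# Placeholder label list: index 0 is the first class in your training label map.
labels = ['my_object']  # replace with your actual labels, in training order

SCORE_THRESHOLD = 0.3  # placeholder detection cutoff

def readable_detections(classes, scores, boxes, count):
    """Pair each confident detection with its human-readable label."""
    detections = []
    for i in range(int(count)):
        if scores[i] < SCORE_THRESHOLD:
            continue  # skip low-confidence boxes (most of the 25 output slots)
        class_id = int(classes[i])
        name = labels[class_id] if class_id < len(labels) else f'unknown({class_id})'
        detections.append((name, float(scores[i]), boxes[i].tolist()))
    return detections
```

Applied to your dump, only the first two boxes clear a 0.3 cutoff, and both report index 0. If the model was retrained on a single class, an all-zero class tensor may simply mean every detection is that one class, since the indices are 0-based into your label list. If you trained with Model Maker, printing your DataLoader's `label_map` before training is one way to check points 3 and 4.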

Please check the above points and let us know if the issue still persists.

Thanks.