TensorFlow Task Library - Max 4 classes detected

Hello, we are working on an Android application.
For model creation we are using Google Colab.
We created a model that contains 8 classes. Unfortunately, we have an issue with detecting more than 4 classes. Our first model contained only 4 classes, all with single-word names. After updating our dataset we added 4 more classes to the model, but in the application we are still only able to detect the 4 previous (old) classes. The new classes have names like “car-main” etc. Could that cause the problem?
In Google Colab, after data validation we get validation scores for all 8 classes. In our app we use the Task Library to work with the .tflite model.

Edit: Things I forgot to mention:
The first 4 classes have validation scores in the range from 70% to 30%.
The last 4 classes have validation scores in the range from 28% to 11%.
- both after quantization.
We cannot detect the new classes even on a test image.
TensorFlow version: 2.5.0.

Thank you for any suggestions!

Hi Civo,

let me try to understand, you:

  • fine-tuned a model with 4 classes, and that worked fine
  • added another 4 classes and fine-tuned again; the model works but with very low accuracy
  • on-device, the model is not recognizing any of the new classes.

Did I get it right?

some suggestions:

  • judging by your results, it seems that the quality of the data used for training the additional 4 classes is worse than that of the original 4. You might need to look into that, for example by adding more data.
  • since the accuracy is low, the Task Library might have a default scoreThreshold that is too high for the new classes, so it is filtering them out during inference (see the sketch below).
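
If you want a quick way to verify whether the threshold is the culprit, here is a minimal sketch using the Task Library's Python bindings from the tflite_support package (the model.tflite and test.jpg paths are just placeholders); on Android the equivalent knob is setScoreThreshold() on ObjectDetectorOptions.

from tflite_support.task import core, processor, vision

# Load the exported Model Maker .tflite (with metadata) into a Task Library detector
base_options = core.BaseOptions(file_name='model.tflite')   # placeholder path
detection_options = processor.DetectionOptions(
    score_threshold=0.01,  # deliberately very low so weak detections are not filtered out
    max_results=25)        # raise the cap so low-scoring classes can still show up
options = vision.ObjectDetectorOptions(base_options=base_options,
                                       detection_options=detection_options)
detector = vision.ObjectDetector.create_from_options(options)

# Run detection on one test image and print every class the model proposes
image = vision.TensorImage.create_from_file('test.jpg')     # placeholder path
for detection in detector.detect(image).detections:
    category = detection.categories[0]
    print(category.category_name, round(category.score, 3), detection.bounding_box)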

Hi Igusm,
thank you for the answer. Let me explain our issue again.
We had a dataset with our images. Each image was labelled with the 4 classes. From this dataset we created our model with the help of Google Colab:

train_data = object_detector.DataLoader.from_pascal_voc('...', '...', ['Truck', 'Car', 'Wheel', 'Jet'])
validation_data = ...

model = object_detector.create(train_data, model_spec=spec, epochs=80, batch_size=8, train_whole_model=True, validation_data=validation_data)

Our accuracy after quantization:

{'AP': 0.57462365,
'AP50': 0.69849122,
'AP75': 0.58790124,
'AP_/Truck': 0.7065886,
'AP_/Car': 0.5276099,
'AP_/Jet': 0.70565295,
'AP_/Wheel': 0.49516096,

After this we created a new model from the same images, but each image was re-labelled (improved bounding boxes) and 4 new classes were added.
Now from the new model we are getting these results after quantization:
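
For reference, the updated loader call looks roughly like this (paths omitted as before; the label order below is just how it is in our label map):

# the class names have to match the <name> tags in our VOC XML annotations
train_data = object_detector.DataLoader.from_pascal_voc(
    '...', '...',
    ['Truck', 'BVP', 'Pilon', 'Ship', 'Scooter', 'Car', 'Jet', 'Wheel'])
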

{'AP': 0.37384665,
'AP50': 0.49949104,
'AP75': 0.39470323,
'AP_/Truck': 0.6365886,
'AP_/BVP': 0.16584158,
'AP_/Pilon': 0.035643563,
'AP_/Ship': 0.27283004,
'AP_/Scooter': 0.21544555,
'AP_/Car': 0.50354499,
'AP_/Jet': 0.61459885,
'AP_/Wheel': 0.36116096,

And we are always able to detect only the 4 previous classes (Car, Jet, Truck, Wheel), even if we set our threshold to 0.01f. We also tried to detect all classes with a Python script on a random image, but the results are the same.
Edit: Updated accuracy.
Thanks for any advice!

I think I understand now.

For these new classes, is the number of training instances similar to the number for the original classes?

Another thing you could do is change the model spec to one with higher accuracy (I think you are using EfficientDet 0, maybe try the 3 just to see the accuracy).
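
Something like this, assuming a recent tflite-model-maker (the spec name is the key used by Model Maker's model_spec registry):

from tflite_model_maker import model_spec, object_detector

# EfficientDet-Lite3: bigger backbone and input size than Lite0, slower but usually more accurate
spec = model_spec.get('efficientdet_lite3')

model = object_detector.create(train_data,
                               model_spec=spec,
                               epochs=80,
                               batch_size=8,
                               train_whole_model=True,
                               validation_data=validation_data)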

Changing the model spec might help a little but my gut feeling is that there’s something in your dataset.

Another thing you could do to get more information is to plot a confusion matrix and try to understand where your model is getting lost (which classes it's misclassifying and which need more samples).
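
A rough sketch of the plotting part, assuming you have already matched each detection to a ground-truth box (e.g. by IoU) and collected the true and predicted class names in two lists:

import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

labels = ['Truck', 'BVP', 'Pilon', 'Ship', 'Scooter', 'Car', 'Jet', 'Wheel']

# y_true / y_pred are placeholders: one entry per matched ground-truth box,
# filled in from your own evaluation loop
y_true = ['Truck', 'Car', 'Pilon', 'Ship']
y_pred = ['Truck', 'Car', 'Car', 'Ship']

cm = confusion_matrix(y_true, y_pred, labels=labels)
ConfusionMatrixDisplay(cm, display_labels=labels).plot(xticks_rotation=45)
plt.show()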

We are actually working with EfficientDet 4, as we don't need real-time recognition.
Labels:

Wheels - more than 20 000
Cars - more than 15 000
Trucks - more than 2 000
Jets - more than 2 000
Ships - more than 2 000
BVPs - more than 2 000
Scooters - more than 2 000
Pilon - more than 2 000

We just can't figure out what could cause the problem. Our final dataset will probably be around 3-4 times bigger, but for now, if we compare the numbers of labels, we have around 10 times more labels for the first main classes, yet the accuracy scores don't differ that much. Wheels and cars should be the most accurate according to the number of labels, but as we can see, the most accurate are jets and trucks, and the least accurate are pilons, even though they have a similar number of labels.

Maybe we are missing something; we are new to machine learning, so any advice is appreciated :slight_smile:

Try the confusion matrix to find where the model is making mistakes.

Other things I’d test:

  • create a model with the new classes only and see what the accuracy looks like
  • test a model without quantization and compare the results (see the sketch below)
  • plot the accuracy/loss to see if more epochs would improve the results
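
For the quantization check, a minimal Model Maker sketch (assuming a recent tflite-model-maker; for_float16 avoids the default full-integer quantization, so you can compare the two exports):

from tflite_model_maker.config import QuantizationConfig

# Accuracy of the trained (float) model on the validation set
print(model.evaluate(validation_data))

# Export with float16 weights instead of the default full-integer quantization
model.export(export_dir='.',
             tflite_filename='model_fp16.tflite',
             quantization_config=QuantizationConfig.for_float16())

# Accuracy of the exported TFLite file, to compare against your quantized model
print(model.evaluate_tflite('model_fp16.tflite', validation_data))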

One last thing I'd try is training your model on AutoML Vision for Object Detection. Since you have all the data ready, it would be a quick test (quick to set up, but training takes time).
The main difference is that AutoML makes much deeper changes to the model than Model Maker does. From this you'll get a very good benchmark of what your data can provide in terms of accuracy. The drawback is that you have to pay for the training. You'll also get a tflite model in the end.

Are these accuracies on the training or the validation/test set?