Image Classification: 100 percent training and validation accuracy problem

You can check my Colab sheet: validation accuracy is 100% and testing accuracy is also 100%, but the model predicts wrongly on new images, even on a new image of the same person.

From the latest Colab, it is observed that training accuracy is gradually increasing across epochs, which is a good sign.

Since the validation and test sets are very small (i.e. 10 images), 100% accuracy is possible by chance. Can you try to increase the test set with more images and see whether the model works well on different images of the same person that were not included in training?

One suggestion: since the use case is attendance tracking, it is better to train with images that contain the face alone, by cropping the images, because there is a chance the model might learn features such as the background or shirt that are not relevant to our use case.
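A minimal sketch of one way to do the cropping, assuming OpenCV's bundled Haar-cascade face detector is acceptable for this pre-processing step (the thread does not prescribe a particular detector); the directory names are placeholders:

```python
import cv2
import os

# Pre-trained frontal-face Haar cascade that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def crop_faces(src_dir, dst_dir, margin=0.2):
    """Detect the largest face in each image and save only the cropped face."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        img = cv2.imread(os.path.join(src_dir, name))
        if img is None:
            continue
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue  # skip images where no face was found
        # Keep the largest detection and add a small margin around it.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        mx, my = int(w * margin), int(h * margin)
        x0, y0 = max(x - mx, 0), max(y - my, 0)
        crop = img[y0:y + h + my, x0:x + w + mx]
        cv2.imwrite(os.path.join(dst_dir, name), crop)

# e.g. crop_faces("dataset/33", "dataset_cropped/33")  # hypothetical paths
```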

Thank you!

I’ll try to increase the test size. I was also already thinking about face cropping. Thanks for the suggestion and help; let me try it and I’ll update you.

I’ve tried it and it’s performing well, but when I deploy the TFLite model file on Android it predicts unknown images as existing classes. Maybe we should introduce an unknown class? I really need your suggestion.

I’ve tried it and it’s performing well

Good to hear.

it predicts unknown images as existing classes.

Can you elaborate on the above statement?

Thank you!

When I train on two classes, 33 and 55, the model predicts fine for both during testing. But when I test it on random images like food, trees, or shoes, it predicts those images as 33 or 55.

You can test the model only on the trained classes (i.e. 33 and 55).

If we want the model to detect food, trees, or shoes, then we should include those images (i.e. those classes) in training so that the model can detect them correctly.
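To illustrate why, here is a minimal sketch assuming a trained two-class Keras model saved at a hypothetical path: a softmax head distributes all probability mass over "33" and "55", so the argmax is forced to return one of them even for a completely unrelated image.

```python
import numpy as np
import tensorflow as tf

class_names = ["33", "55"]   # the only labels the model was trained on

# Hypothetical path to the trained two-class classifier.
model = tf.keras.models.load_model("face_classifier.h5")

# An unrelated image, e.g. a tree (preprocessing here assumes inputs in [0, 1]).
img = tf.keras.utils.load_img("tree.jpg", target_size=(224, 224))
x = tf.keras.utils.img_to_array(img)[None, ...] / 255.0

probs = model.predict(x)[0]
# probs sums to 1 over the two trained classes, so the model has to answer
# "33" or "55" no matter what the image actually contains.
print(class_names[int(np.argmax(probs))], probs)
```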

Thank you!

I want the model to not predict anything outside its classes. That’s what I want.

If your goal is to detect persons/objects other than those two classes (i.e. 33, 55), you have to add one more class and train the model from scratch with images that do not contain 33 or 55, along with the existing classes (i.e. 33, 55).

Thank you!

I don’t want to detect other persons; I want the model to not predict anything when it sees other images.

About adding another class: you mean adding another class of other images that works as our "else" class, so it predicts "none" or something similar when it sees other images. Right?

We can’t stop the model from predicting one of the classes it was trained on.

As per the above comment, we have to add one more class labeled "other" and train the model with images that are complementary to the other two classes (i.e. 33, 55).
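A minimal sketch of how such a dataset could be laid out and loaded, assuming a folder-per-class structure and `tf.keras.utils.image_dataset_from_directory`; the directory names are placeholders:

```python
import tensorflow as tf

# Hypothetical directory layout after adding the extra class:
#   dataset/33/...      images of person 33
#   dataset/55/...      images of person 55
#   dataset/other/...   varied images of anything that is NOT 33 or 55
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", validation_split=0.2, subset="training", seed=42,
    image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", validation_split=0.2, subset="validation", seed=42,
    image_size=(224, 224), batch_size=32)

print(train_ds.class_names)   # ["33", "55", "other"]

# The classification head must now have three outputs, e.g.
#   tf.keras.layers.Dense(len(train_ds.class_names), activation="softmax")
# so "other" acts as the catch-all label for unknown people and objects.
```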

Thank you!

Like adding random images to the "other" class, right?

Like adding random images to the "other" class, right?

Yes

@chunduriv
Hi,
I just tried facial recognition with face cropping. Validation and training accuracy are OK, but testing accuracy is not that good. What could be the problem?

I’m using the Inception pre-trained model for the facial recognition problem. Is that okay?

I also added a "none" class to the dataset, but even for a tree it predicts some person’s ID. How can I tackle this?

Hi,
I have a concern about the trained model.
The model predicts fine for the person with ID 33, but for a new person or some other object it predicts the same ID.
I also added a "none" class and the accuracy is fine, but in production it predicts a person-class label even for unknown objects.

Results on test data depend on the quality of the training data and its distribution. If the training data is diverse and abundant, we can expect the model to work well.

Changing the pre-trained model will not bring a significant change in test accuracy, but fine-tuning the pre-trained model with good data may bring the desired results.
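As a hedged illustration of that fine-tuning idea, here is a minimal two-stage sketch assuming an InceptionV3 backbone (as mentioned earlier in the thread), three classes (33, 55, other), and the `train_ds`/`val_ds` datasets from the sketch above; the number of unfrozen layers, learning rates, and epoch counts are illustrative only:

```python
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # InceptionV3 expects [-1, 1]
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(3, activation="softmax"),      # 33, 55, other
])

# Stage 1: train only the new head with the backbone frozen.
base.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)

# Stage 2: unfreeze only the top of the backbone and continue with a low
# learning rate so the pre-trained weights are adjusted gently (fine-tuning).
base.trainable = True
for layer in base.layers[:-30]:      # keep most of the network frozen
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```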

Thank you!
