I wanted to create a system that can identify rubber ducks using the TensorFlow Lite Model Maker for object detection, the same tool originally used to create a salad detector in the YouTube tutorials. When I used images taken on my phone (full resolution on a Samsung Galaxy S21 Ultra is 1440 x 3200 pixels, a 20:9 aspect ratio), the system got a 0.0 testing accuracy. But when I used photos of rubber ducks from the internet, it worked fine and reached 99% accuracy in testing. I used roughly the same number of photos (about 400), 50 epochs, and tried batch sizes from 8 to 24 on the phone photos. Is there a limit to the size of the photos that TensorFlow Lite can train on? I also tried taking 4:3 aspect ratio photos on my phone, and that didn't work either.
Did you follow this guide: Object Detection with TensorFlow Lite Model Maker?
One problem might be that your images are too big, and when the tool crops them down to the expected input size it accidentally removes the ducks (that's just my guess).
The tutorial I mentioned has a specific link on preparing your own data, which might give you some insight into how to fix this.
Hope it helps!