I have trained an Object Detection model using TFLite Model Maker. I can run the model in my Android app with the Task Library, but it currently runs on the phone's CPU. How can I make the model run on the GPU with the Task Library? (I am running it from Java, not C++.)
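For context, this is roughly what I would expect the GPU setup to look like, based on the Task Library's `BaseOptions.builder().useGpu()` API (a sketch, not tested on my device; the model path and option values are placeholders):

```java
import org.tensorflow.lite.task.core.BaseOptions;
import org.tensorflow.lite.task.vision.detector.ObjectDetector;
import org.tensorflow.lite.task.vision.detector.ObjectDetector.ObjectDetectorOptions;

// Request the GPU delegate through the Task Library's BaseOptions.
// If the GPU delegate is unavailable on the device, creation may fail,
// so a CPU fallback path is worth keeping.
BaseOptions baseOptions = BaseOptions.builder()
        .useGpu()
        .build();

ObjectDetectorOptions options = ObjectDetectorOptions.builder()
        .setBaseOptions(baseOptions)
        .setMaxResults(5)          // placeholder value
        .setScoreThreshold(0.5f)   // placeholder value
        .build();

// "model.tflite" is a placeholder for the Model Maker export in assets.
ObjectDetector detector =
        ObjectDetector.createFromFileAndOptions(context, "model.tflite", options);
```

Note that the GPU delegate dependency (`tensorflow-lite-gpu-delegate-plugin`) must also be added to the app's Gradle file for `useGpu()` to work.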
Sorry about the confusing wording. I mean that the detection result from the TensorFlow Lite Interpreter is not as accurate as the result from the Task Library. For example, I trained a model for vehicle license plate detection: while the Task Library outputs a detection box that covers 100% of the plate, the Interpreter outputs a detection box that covers only about a third of the plate.
We have been updating the object detection example over the past month. It seems that we have to use two different getTransformationMatrix methods, one for the Task Library and one for the Interpreter, or find an abstract class for drawing the boxes.
Now it is up to you to change the method mentioned above, since the models work fine and the problem occurs afterwards, when you want to render the results on screen.
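A minimal sketch of what such a coordinate transformation has to do, assuming a "fit center" (letterboxed) preview: scale the box from model-input coordinates to view coordinates and shift it by the letterbox offsets. All names here are illustrative, not taken from the example repo:

```java
import java.util.Arrays;

public class Main {
    /**
     * Maps a detection box given in model-input pixel coordinates
     * (as the Task Library returns them) into on-screen view coordinates,
     * assuming the input image is scaled with "fit center" into the view.
     * Raw Interpreter output is usually normalized to [0, 1], so multiply
     * by inputW/inputH first in that case.
     */
    public static float[] toViewCoords(
            float left, float top, float right, float bottom,
            int inputW, int inputH,   // model input size, e.g. 320x320
            int viewW, int viewH) {   // on-screen view size
        float scale = Math.min((float) viewW / inputW, (float) viewH / inputH);
        float dx = (viewW - inputW * scale) / 2f; // horizontal letterbox offset
        float dy = (viewH - inputH * scale) / 2f; // vertical letterbox offset
        return new float[] {
            left * scale + dx, top * scale + dy,
            right * scale + dx, bottom * scale + dy
        };
    }

    public static void main(String[] args) {
        // A 320x320 model input rendered into a 640x960 view:
        float[] box = toViewCoords(80, 80, 240, 160, 320, 320, 640, 960);
        System.out.println(Arrays.toString(box)); // prints [160.0, 320.0, 480.0, 480.0]
    }
}
```

If the two code paths disagree on the drawn box, a mismatch in this mapping (normalized vs. pixel coordinates, or a missing letterbox offset) is a likely cause.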
If you have the project online, please paste the link so we can review the problem.