TensorFlow Lite model works in Python but not in Java Android app


Hi @George_Zheng

Is it necessary to do that using OpenCV? Do you have restrictions that require it?
Wouldn't it be simpler to load the Bitmap directly from the assets folder, resize it, and create the ByteBuffer?

Like:

fun loadBitmapFromResources(context: Context, path: String): Bitmap {
    val inputStream = context.assets.open(path)
    return BitmapFactory.decodeStream(inputStream)
}

val loadedBitmap = loadBitmapFromResources(context, "woman.png")

val inputBitmap = Bitmap.createScaledBitmap(
    loadedBitmap,
    width,
    height,
    true
)

val tensorImage = TensorImage(DataType.FLOAT32)
tensorImage.load(inputBitmap)

val outputBuffer = TensorBuffer.createFixedSize(intArrayOf(1, 4), DataType.FLOAT32)
tflite.run(tensorImage.buffer, outputBuffer.buffer)
val data2 = outputBuffer.floatArray

and log to see the values of data2.

Or, if your model contains metadata, you can follow the example here, where ML Model Binding also does the preprocessing for you:

Check the generated code for a sample.

Ping me if you need more help

Hi @George_Soloupis ,

Thanks for your response. We use OpenCV extensively in Python for a multistage classification project (using prerecorded videos as input) to take advantage of its simplicity in matrix manipulation, and we are now porting the code to an Android device. The example I put together is a simplified version of the first stage; the algorithm requires each input video frame to be converted to single-channel grayscale and then normalized so pixel values fall between 0.0 and 1.0. Your code example is written in Kotlin, which I am not familiar with, but I was able to translate it into Java (included below so you can review).

String path = "/storage/emulated/0/Pictures/test_img.png";
try {
        InputStream inputStream = new FileInputStream(path);
        Bitmap loadedBitmap = BitmapFactory.decodeStream(inputStream);
        Bitmap scaledBitmap = Bitmap.createScaledBitmap(loadedBitmap, 256, 256, true);
        TensorImage tensorImage = new TensorImage(DataType.FLOAT32);
        tensorImage.load(scaledBitmap);
        MappedByteBuffer tfliteModel = FileUtil.loadMappedFile(this, "test_model.tflite");
        Interpreter tflite = new Interpreter(tfliteModel);
        TensorBuffer outputBuffer = TensorBuffer.createFixedSize(new int[]{1, 4}, DataType.FLOAT32);
        tflite.run(tensorImage.getBuffer(), outputBuffer.getBuffer());
        ...

Two follow-up questions:

  1. tensorImage contains 3 channels, but I need to convert it to grayscale and keep just one channel before feeding the input to the model. What's the correct way of doing that? Executing the last line as-is currently results in the following exception:
    java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite buffer with 262144 bytes and a Java Buffer with 786432 bytes.
  2. Note that we haven’t divided the pixel values by 255.0 before feeding that into the last line. How can I do that as required by the model, once the above error is fixed?
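The two byte counts in that exception line up exactly with the tensor shapes involved, which confirms the channel mismatch: the model expects a single-channel 256×256 FLOAT32 input, while the TensorImage holds three channels. A minimal arithmetic check (the `BufferSizeCheck` class name is just for illustration):

```java
public class BufferSizeCheck {
    public static void main(String[] args) {
        final int bytesPerFloat = 4;
        // Model input: 1 x 256 x 256 x 1 grayscale FLOAT32 tensor
        int modelInputBytes = 256 * 256 * 1 * bytesPerFloat;
        // TensorImage loaded from an RGB bitmap: 1 x 256 x 256 x 3 FLOAT32
        int tensorImageBytes = 256 * 256 * 3 * bytesPerFloat;
        System.out.println(modelInputBytes);   // 262144
        System.out.println(tensorImageBytes);  // 786432
    }
}
```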

Thanks again; I look forward to your follow-up.

George

There is also a solution for that!

Add to your project a dependency for Support Library:

implementation 'org.tensorflow:tensorflow-lite-support:0.2.0'

Then check documentation for ImageProcessor:

There you can resize, normalize, and get a single-channel buffer directly, like:

fun loadBitmapFromResources(context: Context, path: String): Bitmap {
    val inputStream = context.assets.open(path)
    return BitmapFactory.decodeStream(inputStream)
}

val loadedBitmap = loadBitmapFromResources(context, "woman.png")

val imageProcessor = ImageProcessor.Builder()
    .add(ResizeOp(width, height, ResizeOp.ResizeMethod.BILINEAR))
    .add(TransformToGrayscaleOp())
    .add(NormalizeOp(0.0F, 255.0F))
    .build()

var tImage = TensorImage(DataType.FLOAT32)
tImage.load(loadedBitmap)
tImage = imageProcessor.process(tImage)

val outputBuffer = TensorBuffer.createFixedSize(intArrayOf(1, 4), DataType.FLOAT32)
tflite.run(tImage.buffer, outputBuffer.buffer)
val data2 = outputBuffer.floatArray

So the basic operator that gives you one channel out of three is TransformToGrayscaleOp(). In that function you can see the whole procedure for converting the bitmap to grayscale and getting the buffer from a single channel. It is an operator @Xunkai_Zhang and I created about 9 months ago :slight_smile:
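As for the divide-by-255 question: NormalizeOp(mean, stddev) applies (x - mean) / stddev to each value, so NormalizeOp(0.0F, 255.0F) is exactly that normalization step. A minimal sketch of the arithmetic (the `Normalize` class here is illustrative, not part of the Support Library):

```java
public class Normalize {
    // Same arithmetic NormalizeOp applies per element: (x - mean) / stddev
    static float normalize(float x, float mean, float stddev) {
        return (x - mean) / stddev;
    }

    public static void main(String[] args) {
        System.out.println(normalize(255f, 0.0f, 255.0f));   // 1.0
        System.out.println(normalize(0f, 0.0f, 255.0f));     // 0.0
        System.out.println(normalize(127.5f, 0.0f, 255.0f)); // 0.5
    }
}
```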

If you do not want to use the TensorFlow Lite Support Library, take a look at this file, which implements the same procedure with plain functions. Convert the bitmap to grayscale:

and get buffer from one channel:
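If you go the manual route, the core of it can be sketched without any Android dependency by working on the ARGB int array that Bitmap.getPixels() produces. This is a sketch under the assumption that the standard luminance weights (0.299 R + 0.587 G + 0.114 B) are acceptable for your model; the `GrayscaleBuffer` class name is made up for illustration:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class GrayscaleBuffer {
    // Convert an ARGB pixel array (as returned by Bitmap.getPixels) into a
    // direct FLOAT32 buffer: one grayscale channel, normalized to [0, 1].
    public static ByteBuffer toGrayFloatBuffer(int[] argbPixels) {
        ByteBuffer buffer = ByteBuffer.allocateDirect(argbPixels.length * 4);
        buffer.order(ByteOrder.nativeOrder());
        for (int pixel : argbPixels) {
            int r = (pixel >> 16) & 0xFF;
            int g = (pixel >> 8) & 0xFF;
            int b = pixel & 0xFF;
            float gray = 0.299f * r + 0.587f * g + 0.114f * b;
            buffer.putFloat(gray / 255.0f); // normalize to [0, 1]
        }
        buffer.rewind();
        return buffer;
    }
}
```

The resulting buffer can be passed straight to Interpreter.run as the input, since its byte count now matches a single-channel FLOAT32 tensor.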

Get back if you have more issues.

Thanks

Hi @George_Soloupis , just got the chance to give this a quick try and I am getting the output I was expecting. Thank you so much for the prompt guidance!

George



I presume that inside Colab you are using OpenCV, and inside Android the standard libraries, for loading and resizing. There are some slight differences between the two. Check an article I wrote about a year ago on those differences:

Best

@George_Soloupis Thanks for the followup. This is very helpful!