Transformer TensorFlow Lite model on Android?

I’m developing a real-time translation app for Android. First I created a deep learning model using a Transformer, and I already have the trained model saved and converted to a "model.tflite" file. I would like to know how I can use that model on Android to translate my language pair, English to Portuguese.

I searched everywhere, and I keep running into all sorts of errors in Android Studio that all seem unrelated to each other. Too much to just paste here.

Hi @Artorius, please refer to this document to learn how to use TFLite models on Android. Thank you.


Hi @Artorius
There are multiple questions here:
How many MBs is your model?
What kind of errors are you facing? When you initialize your model, or during inference?
For this kind of usage you probably have to do low-level inference with the TensorFlow Lite Interpreter, as @Kiran_Sai_Ramineni has suggested. What have you tried so far from the docs?
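Roughly, a low-level Interpreter call looks like this (a minimal sketch, assuming modelBuffer already holds the memory-mapped .tflite file and a single float32 input and output; adjust the shapes and types to whatever your model actually uses):

import org.tensorflow.lite.Interpreter;

// Minimal sketch of low-level TFLite inference. The shapes here are
// placeholders: one float32 input of shape [1, 4], one output of [1, 2].
try (Interpreter interpreter = new Interpreter(modelBuffer)) {
    float[][] input = new float[1][4];   // fill with your preprocessed data
    float[][] output = new float[1][2];  // filled in by the interpreter
    interpreter.run(input, output);
}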

Regards


Hi @George_Soloupis,
I checked the documentation as @Kiran_Sai_Ramineni suggested, but I couldn’t find an example of translation with a Transformer on Android, like the model in the TensorFlow tutorial "Neural machine translation with a Transformer and Keras".

You are right about the inference part; that’s where I have problems. This is my code:

package com.example.projecttranslateusingjava;

import android.app.Activity;
import android.content.res.AssetFileDescriptor;
import android.os.Bundle;
import android.widget.Button;
import android.widget.EditText;
import android.widget.TextView;

import org.tensorflow.lite.Interpreter;

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class MainActivity extends Activity {

private static final String MODEL_FILENAME = "converted_model.tflite";
private static final int MAX_LENGTH = 128;
private static final String[] INPUT_VOCAB = {"<start>", "hello", "how", "are", "you", "<end>"};
private static final String[] OUTPUT_VOCAB = {"<start>", "salut", "comment", "allez", "vous", "<end>"};

private EditText inputText;
private TextView outputText;
private final Map<String, Integer> inputVocabIndexMap = new HashMap<>();
private final Map<String, Integer> outputVocabIndexMap = new HashMap<>();

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    inputText = findViewById(R.id.input_text);
    outputText = findViewById(R.id.output_text);
    Button translateButton = findViewById(R.id.translate_button);

    // Initialize input and output vocab index maps
    for (int i = 0; i < INPUT_VOCAB.length; i++) {
        inputVocabIndexMap.put(INPUT_VOCAB[i], i);
    }
    for (int i = 0; i < OUTPUT_VOCAB.length; i++) {
        outputVocabIndexMap.put(OUTPUT_VOCAB[i], i);
    }

    // Load the TensorFlow Lite model from the assets folder
    MappedByteBuffer modelBuffer = null;
    try {
        modelBuffer = loadModelFile();
    } catch (IOException e) {
        e.printStackTrace();
    }
    Interpreter interpreter = new Interpreter(Objects.requireNonNull(modelBuffer));

    // Set up the click listener for the translate button
    translateButton.setOnClickListener(v -> {
        // Get the input text and prepare it for the model
        String inputString = inputText.getText().toString();
        String[] input = prepareInput(inputString);

        // Prepare the output buffer, pre-filled with the <end> token index
        int[] output = new int[MAX_LENGTH];
        Arrays.fill(output, outputVocabIndexMap.get("<end>"));

        // Run the model on the prepared input array
        Map<Integer, Object> outputMap = new HashMap<>();
        outputMap.put(0, output);
        interpreter.runForMultipleInputsOutputs(new Object[] {input}, outputMap);


        // Decode the output
        String outputString = decodeOutput(output);
        outputText.setText(outputString);
    });
}

private MappedByteBuffer loadModelFile() throws IOException {
    AssetFileDescriptor fileDescriptor = getResources().getAssets().openFd(MODEL_FILENAME);
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}

private String[] prepareInput(String inputString) {
    String[] input = new String[MAX_LENGTH];
    Arrays.fill(input, "<pad>");
    String[] tokens = inputString.toLowerCase().split(" ");
    // Copy at most MAX_LENGTH - 1 tokens so there is always room for <end>
    int count = Math.min(tokens.length, MAX_LENGTH - 1);
    System.arraycopy(tokens, 0, input, 0, count);
    input[count] = "<end>";
    return input;
}
private String decodeOutput(int[] output) {
    StringBuilder sb = new StringBuilder();
    // Start at index 1 to skip <start>, and stop at the first <end> token
    for (int i = 1; i < output.length; i++) {
        if (output[i] == outputVocabIndexMap.get("<end>")) {
            break;
        }
        sb.append(OUTPUT_VOCAB[output[i]]);
        sb.append(" ");
    }
    return sb.toString().trim();
}
}

And this is the error I am facing right now:

03/13 12:37:02: Launching 'app' on Lenovo TB-X606F.
Install successfully finished in 16 s 33 ms.
$ adb shell am start -n "com.example.projecttranslateusingjava/com.example.projecttranslateusingjava.MainActivity" -a android.intent.action.MAIN -c android.intent.category.LAUNCHER
Connected to process 31691 on device 'lenovo_tb_x606f-HPV4BJEB'.
Capturing and displaying logcat messages from application. This behavior can be disabled in the "Logcat output" section of the "Debugger" settings page.
D/ApplicationPackageManager: hasSystemFeature android.software.picture_in_picture com.example.projecttranslateusingjava
I/InterpreterApi: Loaded native library: tensorflowlite_jni
I/InterpreterApi: Didn't load native library: tensorflowlite_jni_gms_client
I/tflite: Initialized TensorFlow Lite runtime.
I/tflite: Created TensorFlow Lite delegate for select TF ops.
I/tflite: TfLiteFlexDelegate delegate: 42 nodes delegated out of 271 nodes with 13 partitions.
I/tflite: Replacing 42 node(s) with delegate (TfLiteFlexDelegate) node, yielding 26 partitions.
I/tflite: Replacing 1 node(s) with delegate (TfLiteFlexDelegate) node, yielding 1 partitions.
I/tflite: Replacing 1 node(s) with delegate (TfLiteFlexDelegate) node, yielding 2 partitions.
I/tflite: Replacing 8 node(s) with delegate (TfLiteFlexDelegate) node, yielding 4 partitions.
W/nslateusingjava: type=1400 audit(0.0:12721): avc: denied { read } for name="u:object_r:vendor_default_prop:s0" dev="tmpfs" ino=397 scontext=u:r:untrusted_app:s0:c175,c256,c512,c768 tcontext=u:object_r:vendor_default_prop:s0 tclass=file permissive=0
E/libc: Access denied finding property "ro.hardware.chipname"
I/tflite: Created TensorFlow Lite XNNPACK delegate for CPU.
I/SurfaceFactory: [static] sSurfaceFactory = com.mediatek.view.impl.SurfaceFactoryImpl@86f6e4
D/ViewRootImpl[MainActivity]: hardware acceleration = true , fakeHwAccelerated = false, sRendererDisabled = false, forceHwAccelerated = false, sSystemRendererDisabled = false
V/PhoneWindow: DecorView setVisiblity: visibility = 0, Parent = android.view.ViewRootImpl@52e9d50, this = DecorView@4a78b49[MainActivity]
E/GraphicExt: Can't load libboost_ext_fwk
E/GraphicExt: GraphicExtModuleLoader::CreateGraphicExtInstance false
I/GPUD: @gpudInitialize: successfully initialized with GL, dbg=0 mmdump_dbg=0 mmpath_dbg=0
D/Surface: Surface::connect(this=0x739aff0000,api=1)
D/Surface: Surface::setBufferCount(this=0x739aff0000,bufferCount=3)
D/Surface: Surface::allocateBuffers(this=0x739aff0000)
W/Gralloc3: mapper 3.x is not supported
E/ion: ioctl c0044901 failed with code -1: Invalid argument
W/System: A resource failed to call close. 
W/System: A resource failed to call close. 
I/AssistStructure: Flattened final assist data: 1140 bytes, containing 1 windows, 4 views
D/AndroidRuntime: Shutting down VM
E/AndroidRuntime: FATAL EXCEPTION: main
    Process: com.example.projecttranslateusingjava, PID: 31691
    java.lang.IllegalArgumentException: Internal error: Failed to run on the given Interpreter: Op type not registered 'CaseFoldUTF8' in binary running on localhost. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
    tensorflow/lite/kernels/reshape.cc:85 num_input_elements != num_output_el
        at org.tensorflow.lite.NativeInterpreterWrapper.run(Native Method)
        at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:247)
        at org.tensorflow.lite.InterpreterImpl.runForMultipleInputsOutputs(InterpreterImpl.java:107)
        at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:80)
        at com.example.projecttranslateusingjava.MainActivity.lambda$onCreate$0$com-example-projecttranslateusingjava-MainActivity(MainActivity.java:74)
        at com.example.projecttranslateusingjava.MainActivity$$ExternalSyntheticLambda0.onClick(Unknown Source:4)
        at android.view.View.performClick(View.java:7147)
        at android.view.View.performClickInternal(View.java:7120)
        at android.view.View.access$3500(View.java:804)
        at android.view.View$PerformClick.run(View.java:27538)
        at android.os.Handler.handleCallback(Handler.java:883)
        at android.os.Handler.dispatchMessage(Handler.java:100)
        at android.os.Looper.loop(Looper.java:214)
        at android.app.ActivityThread.main(ActivityThread.java:7399)
        at java.lang.reflect.Method.invoke(Native Method)
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:502)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:980)

So the model works in Python but does not work inside Android, giving the above error?
This seems to be an unsupported operator:
java.lang.IllegalArgumentException: Internal error: Failed to run on the given Interpreter: Op type not registered 'CaseFoldUTF8' in binary running on localhost. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
Let me tag @khanhlvg for some help.

@George_Soloupis, yeah, that’s exactly right. I can’t seem to run inference on the model when I run the code from Android Studio. Before that I had another error, which went away when I changed the input type from float32 to string. I don’t know if this may give a clue:

FATAL EXCEPTION: main
Process: com.example.projecttranslateusingjava, PID: 7813
java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.projecttranslateusingjava/com.example.projecttranslateusingjava.MainActivity}: java.lang.IllegalArgumentException: Cannot convert between a TensorFlowLite tensor with type STRING and a Java object of type [F (which is compatible with the TensorFlowLite type FLOAT32).
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3270)
at

This shows a different error, about the inputs that were expected, and it’s really nice that based on the log’s feedback you managed to solve it.
The other error above is more complicated; it seems to be an unsupported operator.
Here is the documentation for some custom ops.
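As a quick check, the Flex delegate that provides those select TF ops can also be attached explicitly, so initialization fails fast if the Flex runtime is not actually available (a small sketch, assuming the org.tensorflow:tensorflow-lite-select-tf-ops dependency is on the classpath and modelBuffer holds the model):

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.flex.FlexDelegate;

// Attach the Flex delegate explicitly instead of relying on automatic
// discovery; constructing it throws if the Flex runtime is missing.
FlexDelegate flexDelegate = new FlexDelegate();
Interpreter.Options options = new Interpreter.Options();
options.addDelegate(flexDelegate);
Interpreter interpreter = new Interpreter(modelBuffer, options);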


Are you using the latest version of TFLite on Android (2.11)?

The op causing the error (CaseFoldUTF8) is listed as a supported op, so it’s weird that you got an error running it.
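One quick way to confirm which runtime is actually linked is to log it at startup (a small sketch using the standard TensorFlowLite helper):

import android.util.Log;
import org.tensorflow.lite.TensorFlowLite;

// Log the TFLite runtime version that was actually loaded; with the
// dependencies below it should report 2.11.0.
Log.i("TFLite", "Runtime version: " + TensorFlowLite.runtimeVersion());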


@khanhlvg, yeah, I saw it on the TensorFlow website in the list of supported ops. I use the newest versions: tensorflow-lite 2.11.0, tensorflow-lite-support 0.4.3, and tensorflow-lite-select-tf-ops 2.11.0. Unfortunately, I also don’t have a clue why it causes this error.

Here is my build.gradle:

plugins {
    id 'com.android.application'
}

android {
    namespace 'com.example.projecttranslateusingjava'
    compileSdk 33

    sourceSets {
        main {
            jniLibs.srcDirs = ['libs', 'libs_arm64-v8a']
        }
    }

    defaultConfig {
        applicationId "com.example.projecttranslateusingjava"
        minSdk 24
        targetSdk 33
        versionCode 1
        versionName "1.0"

        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }

    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}

dependencies {
    implementation 'androidx.appcompat:appcompat:1.6.1'
    implementation 'com.google.android.material:material:1.8.0'
    implementation 'androidx.constraintlayout:constraintlayout:2.1.4'
    implementation 'org.tensorflow:tensorflow-lite:2.11.0'
    implementation 'org.tensorflow:tensorflow-lite-support:0.4.3'
    implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:2.11.0'

    testImplementation 'junit:junit:4.13.2'
    androidTestImplementation 'androidx.test.ext:junit:1.1.5'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.5.1'
}

I tried to follow the same steps as in the TensorFlow Transformer tutorial, Neural machine translation with a Transformer and Keras. I then converted the model into a .tflite file using Python like this:
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS,    # enable TensorFlow ops.
]
tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)

Is it perhaps because of the shapes of my TFLite model? They look like this:

my input:

[{'name': 'serving_default_sentence:0',
  'index': 0,
  'shape': array([], dtype=int32),
  'shape_signature': array([], dtype=int32),
  'dtype': numpy.bytes_,
  'quantization': (0.0, 0),
  'quantization_parameters': {'scales': array([], dtype=float32),
                              'zero_points': array([], dtype=int32),
                              'quantized_dimension': 0},
  'sparsity_parameters': {}}]

my output:

[{'name': 'StatefulPartitionedCall_2:0',
  'index': 322,
  'shape': array([], dtype=int32),
  'shape_signature': array([], dtype=int32),
  'dtype': numpy.bytes_,
  'quantization': (0.0, 0),
  'quantization_parameters': {'scales': array([], dtype=float32),
                              'zero_points': array([], dtype=int32),
                              'quantized_dimension': 0},
  'sparsity_parameters': {}}]
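Since both tensors are scalar strings, the Java side would pass strings rather than int token arrays. A hedged sketch using the signature API (the signature key "serving_default" and the output name "output_0" are assumptions; the real names can be read with interpreter.getSignatureKeys() and interpreter.getSignatureOutputs(...)):

import java.util.HashMap;
import java.util.Map;

// Sketch: feed the string signature directly. The input name "sentence"
// is taken from 'serving_default_sentence:0' above; the signature key and
// output name are guesses. If the exported tensors are truly 0-d strings,
// the Java API's string support is limited and the model may need to be
// re-exported with batched ([1]) string inputs/outputs instead.
Map<String, Object> inputs = new HashMap<>();
inputs.put("sentence", "hello how are you");

String[] translated = new String[1];            // output buffer (assumes shape [1])
Map<String, Object> outputs = new HashMap<>();
outputs.put("output_0", translated);

interpreter.runSignature(inputs, outputs, "serving_default");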

Respected, can you please explain why my trained deep learning model loses quality when I convert it into a TFLite model?
I trained my model on 15,000 images, for forgery detection with 4 classes.
My model file is 343 MB, but after conversion its size is 110 MB.
Via TFLite it only ever gives me a prediction for 1 class instead of 4.

Can you please suggest an alternative?

Respected, can you please help me? I am trying to connect an Android application to a Flask web app (Python) using OkHttp, for image forgery detection. Can you please provide me with code or some suggestions on how I can upload an image from Android to Flask and then get the response back in my Android application?
I already used the TFLite model file in Android, but converting my .h5 model file into TFLite reduced my model’s quality.

Hi @Oswa_Iqbal, I have a small request for you. While I don’t mind if you ask a question in this thread along with your response, I kindly ask that you avoid double posting. In other words, please don’t post twice in a row without giving others a chance to respond. Thank you for your consideration.