Handling IllegalArgumentException in Android Studio Project

I am trying to develop an Android app to read emotion in real time using OpenCV and TensorFlow Lite. I have a CameraActivity that implements the camera access in my project, and it also calls a FacialExpressionRecognition class where the Interpreter is implemented. Everything looks fine and there are no error messages in the code, but when I run the project the camera opens; on detection of any image or human face, the app shuts down with the following error in Logcat:

java.lang.IllegalArgumentException: Internal error: Failed to run on the given Interpreter: Can not open OpenCL library on this device - dlopen failed: library "libOpenCL.so" not found
Falling back to OpenGL
TfLiteGpuDelegate Invoke: GpuDelegate must run on the same thread where it was initialized.
Node number 68 (TfLiteGpuDelegateV2) failed to invoke.

at org.tensorflow.lite.NativeInterpreterWrapper.run(Native Method)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:163)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:360)
at org.tensorflow.lite.Interpreter.run(Interpreter.java:319)

at org.opencv.android.JavaCameraView$CameraWorker.run(JavaCameraView.java:373)
at java.lang.Thread.run(Thread.java:1012)

Below is my code:

public class FacialExpressionRecognition {
private static Interpreter interpreter;
private static int INPUT_SIZE;
private static int height = 0;
private static int width = 0;
private GpuDelegate gpuDelegate = null;
private static CascadeClassifier cascadeClassifier;
FacialExpressionRecognition(AssetManager assetManager, Context context, String modelPath, int inputSize) throws IOException {
INPUT_SIZE = inputSize;
// Set GPU for interpreter
Interpreter.Options options = new Interpreter.Options();
gpuDelegate = new GpuDelegate();
// Add GPU delegate to options
options.addDelegate(gpuDelegate);
// Now set number of threads to options
options.setNumThreads(8); // This should be set according to your phone
interpreter = new Interpreter(loadModelFile(assetManager, modelPath), options);
// If model is loaded print
Log.d("facial_expression", "model is loaded");

    // Now let's load the haarcascade classifier
    try {
        // define input stream to read classifier
        InputStream is = context.getResources().openRawResource(R.raw.haarcascade_frontalface_alt);
        // Create a folder
        File cascadeDir = context.getDir("cascade", Context.MODE_PRIVATE);
        // Now create a file in that folder
        File mCascadeFile = new File(cascadeDir, "haarcascade_frontalface_alt");
        // Now define output stream to transfer data to file we created
        FileOutputStream os = new FileOutputStream(mCascadeFile);
        // Now let's create buffer to store byte
        byte[] buffer = new byte[4096];
        int byteRead;
        // read byte in while loop
        // when it read -1 that means no data to read
        while ((byteRead = is.read(buffer)) != -1){
            // writing on mCascade file
            os.write(buffer, 0, byteRead);
        }
        // close input and output streams
        is.close();
        os.close();
        cascadeClassifier = new CascadeClassifier(mCascadeFile.getAbsolutePath());
        // if cascade file is loaded print
        Log.d("facial_expression", "Classifier is loaded");
    } catch (IOException e){
        e.printStackTrace();
    }
}

public static Mat recognizeImage(Mat mat_image){
   // Before predicting our image is not aligned properly
    // we have to rotate it by 90 degrees for proper prediction
    Core.flip(mat_image.t(), mat_image, 1);
    // Start with our process; convert mat_image to gray scale image
    Mat grayscaleImage = new Mat();
    Imgproc.cvtColor(mat_image, grayscaleImage, Imgproc.COLOR_RGB2GRAY);
    // Set height and width
    height = grayscaleImage.height();
    width = grayscaleImage.width();

    // define minimum height of face in original image
    // below this size no face in original image will show
    int absoluteFaceSize = (int)(height*0.1);
    // now create MatOfRect to store face
    MatOfRect faces = new MatOfRect();
    // Check if cascadeClassifier is loaded or not
    if (cascadeClassifier != null) {
        cascadeClassifier.detectMultiScale(grayscaleImage, faces, 1.1, 2, 2,
                new Size(absoluteFaceSize, absoluteFaceSize), new Size());
    }
    // now convert it to an array
    Rect[] faceArray = faces.toArray();
    // loop through each face
    for (int i = 0; i < faceArray.length; i++){
        // if you want to draw rectangle around the face
        //                input/output starting point and ending point          color R  G   B   alpha       thickness
        Imgproc.rectangle(mat_image, faceArray[i].tl(), faceArray[i].br(), new Scalar(0, 255, 0, 255), 2);
        // Now crop face from original frame and grayscaleImage

        Rect roi = new Rect((int)faceArray[i].tl().x, (int)faceArray[i].tl().y,
                (int)faceArray[i].br().x - (int)faceArray[i].tl().x,
                (int)faceArray[i].br().y - (int)faceArray[i].tl().y);
        Mat cropped_rgba = new Mat(mat_image, roi);
        // now convert cropped_rgba to bitmap
        Bitmap bitmap = null;
        bitmap = Bitmap.createBitmap(cropped_rgba.cols(), cropped_rgba.rows(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(cropped_rgba, bitmap);
        // resize bitmap to (48, 48)
        Bitmap scaledBitmap = Bitmap.createScaledBitmap(bitmap, 48, 48, false);
        // now convert scaledBitmap to byteBuffer
        ByteBuffer byteBuffer = convertBitmapToByteBuffer(scaledBitmap);
        // now create an object to hold output
        float[][] emotion = new float[1][1];
        // now predict with bytebuffer as an input emotion as an output
        interpreter.run(byteBuffer, emotion);
        // if emotion is recognized print value of it

        // define float value of emotion
        float emotion_v = emotion[0][0];
        Log.d("facial_expression", "Output: " + emotion_v);
        // create a function that return text emotion
        String emotion_s = get_emotion_text(emotion_v);
        // now put text on original frame(mat_image)
        Imgproc.putText(mat_image, emotion_s + " (" + emotion_v + ")",
                new Point((int)faceArray[i].tl().x + 10, (int)faceArray[i].tl().y + 20),
                1, 1.5, new Scalar(0, 0, 255, 150), 2);
    }

    // After prediction rotate mat_image -90 degrees
    Core.flip(mat_image.t(), mat_image, 0);
    return mat_image;
}

private static String get_emotion_text(float emotionV) {
    // create an empty string
    String val = "";
    // use if statement to determine val
    if (emotionV >= 0 && emotionV < 0.5){
        val = "Surprise";
    } else if (emotionV >= 0.5 && emotionV < 1.5) {
        val = "Fear";
    } else if (emotionV >= 1.5 && emotionV < 2.5) {
        val = "Angry";
    } else if (emotionV >= 2.5 && emotionV < 3.5) {
        val = "Neutral";
    } else if (emotionV >= 3.5 && emotionV < 4.5) {
        val = "Sad";
    } else if (emotionV >= 4.5 && emotionV < 5.5) {
        val = "Disgust";
    } else {
        val = "Happy";
    }
    return val;
}

private static ByteBuffer convertBitmapToByteBuffer(Bitmap scaledBitmap) {
    ByteBuffer byteBuffer;
    int size_image = INPUT_SIZE; //48

    // 4 is multiplied for float input
    // 3 is multiplied for rgb
    byteBuffer = ByteBuffer.allocateDirect(4 * 1 * size_image * size_image * 3);

    int[] intValues = new int[size_image * size_image];
    scaledBitmap.getPixels(intValues, 0, scaledBitmap.getWidth(), 0, 0, scaledBitmap.getWidth(), scaledBitmap.getHeight());
    int pixel = 0;
    for (int i = 0; i < size_image; ++i){
        for (int j = 0; j < size_image; ++j){
            final int val = intValues[pixel++];
            // now put float value to bytebuffer
            // scale image to convert image from 0-255 to 0-1
            byteBuffer.putFloat(((val >> 16) & 0xFF) / 255.0f);
            byteBuffer.putFloat(((val >> 8) & 0xFF) / 255.0f);
            byteBuffer.putFloat((val & 0xFF) / 255.0f);
        }
    }

    System.out.println("Position before rewind: " + byteBuffer.position());
    // Make sure to reset the position to the beginning of the buffer
    byteBuffer.rewind();
    System.out.println("Position after rewind: " + byteBuffer.position());

    return byteBuffer;
}

private MappedByteBuffer loadModelFile(AssetManager assetManager, String modelPath) throws IOException {
    AssetFileDescriptor assetFileDescriptor = assetManager.openFd(modelPath);
    FileInputStream inputStream = new FileInputStream(assetFileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();

    long startOffset = assetFileDescriptor.getStartOffset();
    long declaredLength = assetFileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}
}


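As context for convertBitmapToByteBuffer above: the buffer holds 4 bytes × 1 batch × 48 × 48 × 3 floats, and each packed ARGB pixel is split with bit shifts and masks, then scaled to [0, 1]. That arithmetic can be sanity-checked in plain Java without any Android types (`unpack` here is just an illustrative helper, not part of the project):

```java
// Sketch of the pixel-unpacking math used in convertBitmapToByteBuffer:
// each ARGB_8888 pixel packs A, R, G, B into one int; shifting and masking
// extracts each channel, and dividing by 255 scales it to [0, 1].
public class PixelUnpack {
    // Returns {r, g, b} as floats in [0, 1] for one packed ARGB pixel.
    static float[] unpack(int argb) {
        float r = ((argb >> 16) & 0xFF) / 255.0f;
        float g = ((argb >> 8) & 0xFF) / 255.0f;
        float b = (argb & 0xFF) / 255.0f;
        return new float[]{r, g, b};
    }

    public static void main(String[] args) {
        float[] w = unpack(0xFFFFFFFF); // opaque white
        float[] r = unpack(0xFFFF0000); // opaque red
        System.out.printf("white: %.1f %.1f %.1f%n", w[0], w[1], w[2]); // 1.0 1.0 1.0
        System.out.printf("red:   %.1f %.1f %.1f%n", r[0], r[1], r[2]); // 1.0 0.0 0.0
    }
}
```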
For the CameraActivity:

public class CameraActivity extends org.opencv.android.CameraActivity {
private Mat mRgba;
private Mat mGray;
CameraBridgeViewBase cameraBridgeViewBase;
private FacialExpressionRecognition facialExpressionRecognition;
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_camera); // layout name assumed

    cameraBridgeViewBase = findViewById(R.id.camera_view);
    cameraBridgeViewBase.setCvCameraViewListener(new CameraBridgeViewBase.CvCameraViewListener2() {
        @Override
        public void onCameraViewStarted(int width, int height) {
            mRgba = new Mat();
            mGray = new Mat();
        }

        @Override
        public void onCameraViewStopped() {
            mRgba.release();
        }

        @Override
        public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
            mGray = inputFrame.gray();
            mRgba = inputFrame.rgba();
            // run face detection and emotion prediction on the frame
            mRgba = FacialExpressionRecognition.recognizeImage(mRgba);
            return mRgba;
        }
    });

    // This will load Cascade Classifier and the model
    // This will only happen one time when the CameraActivity is started
    try {
        int inputSize = 48;
        facialExpressionRecognition = new FacialExpressionRecognition(getAssets(), CameraActivity.this,
                "model300.tflite", inputSize);
    } catch (IOException e){
        e.printStackTrace();
    }

    // getPermission();
}

@Override
protected void onResume() {
    super.onResume();
    cameraBridgeViewBase.enableView();
}

@Override
protected void onDestroy() {
    super.onDestroy();
    cameraBridgeViewBase.disableView();
}

@Override
protected void onPause() {
    super.onPause();
    cameraBridgeViewBase.disableView();
}

@Override
protected List<? extends CameraBridgeViewBase> getCameraViewList() {
    return Collections.singletonList(cameraBridgeViewBase);
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);

    // Ensure that this result is for the camera permission request
    if (requestCode == 101 && grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        // The camera permission was granted, enable the camera view
        cameraBridgeViewBase.setCameraPermissionGranted();
    }
}
}
Please, any assistance will be appreciated, as I have been stuck on this for weeks and my deadline is closing in. Thank you.

@Stephen_Nnamani it seems like the issue you’re encountering is related to the TensorFlow Lite GPU delegate and OpenCL libraries. The error suggests that it’s falling back to OpenGL due to difficulties opening the OpenCL library (“libOpenCL.so”). Let’s troubleshoot a few potential areas:

  1. Ensure that your Android device supports OpenCL. The error might stem from missing OpenCL libraries. Verify the OpenCL support in your device specifications.
  2. The GPU delegate needs to be initialized on the same thread where it’s created. Confirm that instances of FacialExpressionRecognition are created and used on the same thread.
  3. The TensorFlow Lite interpreter and GPU delegate might not be thread-safe. Make sure that all interpreter calls occur on the thread where the interpreter was initialized. In your case, confirm that FacialExpressionRecognition.recognizeImage(mRgba) runs on the same thread.
  4. Move interpreter initialization to the onCreate method of CameraActivity. Typically, onCreate is executed on the main thread.

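To make the threading points concrete, here is a minimal plain-Java sketch of the confinement pattern: create the thread-bound resource and call it only from one single-threaded executor. The TFLite parts (Interpreter, GpuDelegate) appear in comments only; every name in this class is illustrative, not an API you already have.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: confine a thread-bound resource (like a TFLite Interpreter with a
// GpuDelegate) to one single-threaded executor, so initialization and every
// later call happen on the same thread. This class just proves both calls
// share one thread; the TFLite calls would go inside the submitted tasks.
public class ThreadConfinement {
    private final ExecutorService inferenceThread = Executors.newSingleThreadExecutor();
    private volatile String initThreadName;

    public void init() throws Exception {
        // e.g. create Interpreter + GpuDelegate inside this task
        inferenceThread.submit(() -> initThreadName = Thread.currentThread().getName()).get();
    }

    public String run() throws Exception {
        // e.g. interpreter.run(byteBuffer, emotion) inside this task
        return inferenceThread.submit(() -> Thread.currentThread().getName()).get();
    }

    public boolean sameThread() throws Exception {
        return run().equals(initThreadName);
    }

    public void close() {
        inferenceThread.shutdown();
    }

    public static void main(String[] args) throws Exception {
        ThreadConfinement tc = new ThreadConfinement();
        tc.init();
        System.out.println(tc.sameThread()); // prints "true"
        tc.close();
    }
}
```

In your app, the camera frames arrive on OpenCV's CameraWorker thread (visible in your stack trace), so either both interpreter creation and interpreter.run must move onto one such executor, or the interpreter must be created on the camera thread itself.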
I have tried to modify the version of your FacialExpressionRecognition class with these adjustments:

// Your existing code...

public class FacialExpressionRecognition {
    // Existing code...

    public FacialExpressionRecognition(AssetManager assetManager, Context context, String modelPath, int inputSize) throws IOException {
        INPUT_SIZE = inputSize;

        // Move GPU delegate initialization to onCreate
        initializeInterpreter(assetManager, modelPath);
        // Load the haarcascade classifier
        // ... (your existing code)
    }

    // Initialize interpreter in onCreate
    private void initializeInterpreter(AssetManager assetManager, String modelPath) throws IOException {
        Interpreter.Options options = new Interpreter.Options();
        gpuDelegate = new GpuDelegate();

        interpreter = new Interpreter(loadModelFile(assetManager, modelPath), options);

        Log.d("facial_expression", "Model is loaded");
    }

    // Existing code...

    public static Mat recognizeImage(Mat mat_image) {
        // Existing code...

        // Now predict with bytebuffer as an input, emotion as an output
        interpreter.run(byteBuffer, emotion);
        // Existing code...
    }

    // Existing code...
}
Give these adjustments a try, and remember to check your Android device for OpenCL support. If issues persist, let me know and we can work something out.
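If the GPU path keeps failing on a particular device, a defensive option is to attempt the accelerated setup and fall back to a CPU-only Interpreter when it throws. Here is a generic sketch of that pattern in plain Java; all names are illustrative and the TFLite calls are indicated only in comments:

```java
import java.util.function.Supplier;

// Sketch of a "try accelerated, fall back to plain" initialization pattern,
// analogous to trying a GpuDelegate-backed Interpreter and falling back to a
// CPU-only one when the device rejects the delegate at construction time.
public class FallbackInit {
    static <T> T createWithFallback(Supplier<T> accelerated, Supplier<T> plain) {
        try {
            return accelerated.get();   // e.g. new Interpreter(model, optionsWithGpuDelegate)
        } catch (RuntimeException e) {  // e.g. IllegalArgumentException from the delegate
            return plain.get();         // e.g. new Interpreter(model, cpuOnlyOptions)
        }
    }

    public static void main(String[] args) {
        String engine = createWithFallback(
                () -> { throw new IllegalArgumentException("no OpenCL"); },
                () -> "cpu");
        System.out.println(engine); // prints "cpu"
    }
}
```

Note this only catches failures raised during construction; a delegate that fails later, inside run(), still needs the thread-confinement fix above.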

Thanks for your help. I have tried restructuring my code as you advised, and my Android device is a Samsung S23 Ultra, which I have checked and it has OpenCL, but the trouble persists.