Load a pluggable device in the TensorFlow C API to use the Mac M1 GPU

I am trying to use the Mac M1 GPU with the TensorFlow C API.

Apple provides a Metal plugin for TensorFlow (Tensorflow Plugin - Metal - Apple Developer), and the GPU can be used from Python code.

Now I am trying to do the same through the TensorFlow C API via the Rust bindings.

    use tensorflow::{Graph, Library, SavedModelBundle, SessionOptions};

    // Load the Metal plugin.
    match Library::load("libmetal_plugin.dylib") {
        Ok(lib) => println!("Loaded plugin successfully. {:?}", lib.op_list()),
        Err(e) => println!("WARNING: Unable to load plugin. {}", e),
    };

    // Load the saved model exported in Python (export_dir is its path on disk).
    let mut graph = Graph::new();
    let bundle = SavedModelBundle::load(
        &SessionOptions::new(),
        &["serve"],
        &mut graph,
        export_dir,
    ).expect("Unable to load model from disk");

    // Enumerate the devices visible to the session.
    println!("{:?}", bundle.session.device_list().unwrap());

tensorflow::Library::load internally calls the TensorFlow C API function TF_LoadLibrary, and it does load the plugin successfully.

Then I tried to enumerate the devices with device_list(), but only the CPU is listed:

[Device { name: "/job:localhost/replica:0/task:0/device:CPU:0", device_type: "CPU", memory_bytes: 268435456, incarnation: 13748960769752595872 }]

How can I use the Mac M1 GPU with the TensorFlow C API?

TensorFlow version: 2.9.0

Found the solution: use the TF_LoadPluggableDeviceLibrary API.
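
For anyone who lands here, below is a minimal sketch of what that can look like from Rust. It is not the exact code from this thread: the Rust crate's Library::load wraps TF_LoadLibrary, so this declares the experimental C API symbol TF_LoadPluggableDeviceLibrary (from tensorflow/c/c_api_experimental.h) over FFI. It assumes the binary already links against libtensorflow 2.9, and the helper name load_pluggable_device is made up for illustration.

    use std::ffi::{CStr, CString};
    use std::os::raw::c_char;

    // Opaque handles from the TensorFlow C API.
    #[allow(non_camel_case_types)]
    #[repr(C)]
    struct TF_Status {
        _private: [u8; 0],
    }

    #[allow(non_camel_case_types)]
    #[repr(C)]
    struct TF_Library {
        _private: [u8; 0],
    }

    #[allow(non_snake_case)]
    extern "C" {
        fn TF_NewStatus() -> *mut TF_Status;
        fn TF_DeleteStatus(s: *mut TF_Status);
        fn TF_GetCode(s: *const TF_Status) -> i32; // 0 == TF_OK
        fn TF_Message(s: *const TF_Status) -> *const c_char;
        // Declared in tensorflow/c/c_api_experimental.h: loads a dylib and
        // registers its PluggableDevice (e.g. the Metal plugin) with the runtime.
        fn TF_LoadPluggableDeviceLibrary(
            library_filename: *const c_char,
            status: *mut TF_Status,
        ) -> *mut TF_Library;
    }

    // Hypothetical helper: call this before creating any session.
    fn load_pluggable_device(path: &str) -> Result<(), String> {
        let c_path = CString::new(path).map_err(|e| e.to_string())?;
        unsafe {
            let status = TF_NewStatus();
            // The returned TF_Library handle is intentionally leaked so the
            // plugin stays registered for the lifetime of the process.
            let lib = TF_LoadPluggableDeviceLibrary(c_path.as_ptr(), status);
            let code = TF_GetCode(status);
            let msg = CStr::from_ptr(TF_Message(status))
                .to_string_lossy()
                .into_owned();
            TF_DeleteStatus(status);
            if code == 0 && !lib.is_null() {
                Ok(())
            } else {
                Err(msg)
            }
        }
    }

Calling load_pluggable_device("libmetal_plugin.dylib") before SavedModelBundle::load should then make the Metal PluggableDevice show up alongside the CPU in device_list().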

Thanks @gammatrix5 for the issue. Yes, this is the API TF itself uses to load the plugin in Python.

Hi @gammatrix5 - any chance you could share the complete solution for loading the dylib?

Also, if we were using the TF C++ API, would this be even easier?

Thanks so much!
Theo