FPGAs for PluggableDevice

Do pluggable devices also support FPGAs?

Hi Chris,

Sorry for the late reply! We have talked through another channel, but I’ll post here too for others’ information:

If the FPGA code can connect to TensorFlow through the C API, it should work. Here are the overall steps:

  1. Create a PluggableDevice.
  2. Write your custom TensorFlow kernels/ops and register them with TensorFlow through the kernel and op registration C API. (We also extended it to support ResourceVariable ops recently.)
  3. Use the StreamExecutor C API for device execution and memory management.
  4. If you’d like to do graph optimization, your plug-in can register a custom graph optimization pass through the graph optimization C API.
  5. We are also looking into a TF Profiler C API for PluggableDevices.
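To make step 2 more concrete, here is a minimal sketch of registering a custom kernel through the kernel registration C API (`tensorflow/c/kernels.h`). The op name `"MyOp"` and the device type string `"FPGA"` are placeholders: the device type must match whatever name your plug-in registers for its PluggableDevice, and the compute body depends entirely on your device's programming model.

```c
#include <stddef.h>
#include "tensorflow/c/kernels.h"
#include "tensorflow/c/tf_status.h"

/* Optional per-kernel state; this sketch keeps none. */
static void* MyKernel_Create(TF_OpKernelConstruction* ctx) {
  return NULL;
}

/* Called for each execution of the op. A real implementation would
 * read the inputs from `ctx`, launch the computation on the FPGA
 * (e.g. via the StreamExecutor stream), and set the outputs. */
static void MyKernel_Compute(void* kernel, TF_OpKernelContext* ctx) {
  (void)kernel;
  (void)ctx;
}

static void MyKernel_Delete(void* kernel) {
  (void)kernel;
}

/* Register the kernel for op "MyOp" on the plugged-in device type.
 * "FPGA" is an assumed device name chosen when the PluggableDevice
 * is registered. */
void RegisterMyKernel(void) {
  TF_KernelBuilder* builder = TF_NewKernelBuilder(
      "MyOp", "FPGA",
      &MyKernel_Create, &MyKernel_Compute, &MyKernel_Delete);
  TF_Status* status = TF_NewStatus();
  TF_RegisterKernelBuilder("MyOp", builder, status);
  if (TF_GetCode(status) != TF_OK) {
    /* Registration failed; inspect TF_Message(status). */
  }
  TF_DeleteStatus(status);
}
```

Note this fragment only builds against the TensorFlow C library; the corresponding op definition would be registered similarly through the op registration C API (`tensorflow/c/ops.h`).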

A tutorial and example code are under construction.

I should also add that PluggableDevice is focused on TensorFlow’s current runtime stack. It may require some migration effort to work with the new runtime stack.