Please add support for multiple GPUs in browser-side TF

I’m working with TensorFlow.js in the browser (not via Node.js) on my first AI project. I’ve been making great progress in general, but I’ve hit a frustrating roadblock and haven’t found a good solution so far. I need to expand my GPU capacity substantially, but for TensorFlow.js in the browser there is no “tf.device” option like the one available in TensorFlow Core, so there is no way to make use of multiple GPUs on the server I run my app on.
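For reference, here is roughly the gap I mean. Today, browser TF.js routes all GPU work through a single backend chosen with tf.setBackend; there is no per-op device placement. The tf.device-style call in the comment below is hypothetical and only illustrates the capability I’m asking for.

```js
import * as tf from '@tensorflow/tfjs';

async function main() {
  // Current browser TF.js: everything runs on one GPU-accelerated backend.
  await tf.setBackend('webgl');
  await tf.ready();

  const a = tf.randomNormal([2048, 2048]);
  const b = tf.randomNormal([2048, 2048]);
  const c = tf.matMul(a, b); // runs on whichever single GPU the browser exposes

  // What this request asks for (hypothetical -- no such API exists in TF.js today):
  // tf.device('/GPU:1', () => { /* place specific ops on a second GPU */ });

  console.log(tf.getBackend(), c.shape);
}

main();
```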

While needing multiple GPUs in a browser-based application may at first seem like a very rare requirement, I don’t think it is as rare as it might appear. There are a huge number of JavaScript developers who have no experience with server-side JavaScript (Node) or Python and have only ever built JavaScript apps in a browser. It seems short-sighted not to enable a multi-GPU capability for those developers and their apps, considering that it does not appear to require a huge amount of development work.

The lack of this capability forces developers who are comfortable building JavaScript apps in a browser to figure out the server-side stack of Node and all the other components needed for server-side development. I have tried: just getting a functional Node environment on my development platform of choice (Windows Server) has been a frustrating, time-consuming challenge, prone to subtle incompatibilities between versions of the various software components. And even if I do implement server-side TensorFlow.js in Node, I still have to figure out how to integrate my front-end JavaScript app with that Node backend so my front end can access the tensor crunching it needs.
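Just to illustrate the extra work that split involves, below is a rough sketch of what the front-end-to-Node integration tends to look like. The endpoint name, payload shape, port, and model path are placeholders I’ve made up for illustration; only the @tensorflow/tfjs-node-gpu and Express calls themselves are real APIs.

```js
// server.js -- hypothetical Node wrapper around tfjs-node-gpu (sketch only)
const express = require('express');
const tf = require('@tensorflow/tfjs-node-gpu'); // runs on the server's CUDA GPU

const app = express();
app.use(express.json({ limit: '10mb' }));

let model;

// '/predict', the request/response shape, and the model path are made-up placeholders
app.post('/predict', async (req, res) => {
  const input = tf.tensor(req.body.input);   // e.g. a nested JS array from the browser
  const output = model.predict(input);
  res.json({ output: await output.array() });
  tf.dispose([input, output]);
});

tf.loadLayersModel('file://./model/model.json').then((m) => {
  model = m;
  app.listen(3000, () => console.log('tfjs-node service listening on :3000'));
});
```

The browser app then has to POST its data to that service (e.g. via fetch) and unpack the JSON result, instead of just calling TensorFlow.js directly, which is exactly the kind of rearchitecting I’m trying to avoid.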

In short, it’s a nightmare to come this far and find that my browser-based implementation has no way to tap into substantially more GPU capacity without completely rearchitecting my application. Apart from needing more TensorFlow.js capacity than a single GPU can provide, the app works absolutely great as a browser-based JavaScript app and is easy and intuitive to use.

As such, I strongly urge you to PLEASE add tf.device-style support to TensorFlow.js so that a browser-based TensorFlow app can utilize multiple GPUs for much greater tensor-crunching capacity and speed without having to completely rearchitect the application.

@Jason might be able to help

Thanks for moving the question to the forum, @CherriGirl.

Just to confirm before I loop others in: are you proposing that the multiple GPUs are on the client-side device, or are you looking to leverage remote GPUs on other client machines somewhere else to do some of the work?

My main concern here is that, if this is about leveraging multiple GPUs on one machine (e.g. yours), extremely few people have multiple GPUs at home to make use of that even if it were possible (and whether it is possible at all most likely depends on the OS/browser).

If you are talking about leveraging remote GPUs somehow via client-side JS to distribute training or the like, then that seems like it could affect a larger number of people, especially given the GPU shortage right now, and may have greater scope.

Let me know. Thanks!