How to work fully offline

Hello,

I use the standard code:

const _tf = require('@tensorflow/tfjs');
const _tfnode = require('@tensorflow/tfjs-node');
const _mobilenet = require('@tensorflow-models/mobilenet');
const _knnClassifier = require('@tensorflow-models/knn-classifier');

var _myNet = await _mobilenet.load();
var _myClassifier = _knnClassifier.create();
// load saved model
await _myNet.model.load(`file://${folder}`);
// load dataset of classifier
let savedDataSet = getSavedClassifier(modelName);
_myClassifier.setClassifierDataset(savedDataSet);
...
// after some learning
if (_myClassifier.getNumClasses() > 0) {
    const activation = _myNet.infer(img, true);
    const result = await _myClassifier.predictClass(activation, 3);
}

This is for my classification program in Node.js.
As you can see, the initial model is discarded and replaced by my classifier.

But naturally, this doesn't start when there is no Internet access.

How can I work fully offline from the very beginning (for example, on a Raspberry Pi mounted on a robot)?

Best regards.

@Jason can help here

You would need to use MobileNet directly instead of using our easy-to-use class wrappers for common models. You can then use model.save() to save the model to local storage or to disk, and load it from disk / local storage instead of from a URL online. To learn how to do this, check my course (free), which explains how to load / save models:
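Roughly, the idea looks like this in Node.js (a sketch only, not the course code; the TF-Hub URL and folder name are placeholders, and GraphModel.save() needs a reasonably recent tfjs version):

const tf = require('@tensorflow/tfjs-node');

// Illustrative TF-Hub URL and local folder - adjust to the MobileNet variant you actually use.
const HUB_URL = 'https://tfhub.dev/google/tfjs-model/imagenet/mobilenet_v2_100_224/feature_vector/3/default/1';
const LOCAL_DIR = './mobilenet_v2';

// Run once while online: fetch the graph model and write model.json + *.bin to disk.
async function cacheModel() {
  const model = await tf.loadGraphModel(HUB_URL, { fromTFHub: true });
  await model.save(`file://${LOCAL_DIR}`);
}

// Every later start is fully offline: load the same files straight from disk.
async function loadOffline() {
  return tf.loadGraphModel(`file://${LOCAL_DIR}/model.json`);
}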


Hello.

So, what I need is exactly that: how to use MobileNet directly instead of using the easy-to-use class wrappers for common models.

Best regards.

I have found some information in Convolutional Neural Network (CNN) | TensorFlow Core, but the sample is for square 32x32 pixel images, which is not very general.

from tensorflow.keras import layers, models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))

A more generic sample (for example, with 320x240 pictures coming from a webcam) would be useful.
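From what I understand, the same stack written with the TensorFlow.js Layers API would simply take a different inputShape, for example [240, 320, 3] for a 320x240 RGB webcam frame (a rough sketch, not tested):

const tf = require('@tensorflow/tfjs-node');

// Same layer stack as the tutorial, but with a 320x240 RGB input instead of 32x32
const model = tf.sequential();
model.add(tf.layers.conv2d({ inputShape: [240, 320, 3], filters: 32, kernelSize: 3, activation: 'relu' }));
model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
model.add(tf.layers.conv2d({ filters: 64, kernelSize: 3, activation: 'relu' }));
model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
model.add(tf.layers.conv2d({ filters: 64, kernelSize: 3, activation: 'relu' }));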

Best regards

Yes, my course covers this. Please see the chapter where I take MobileNet, load it from TF-Hub, chop up the layers, retrain a new model, and then save the resulting Layers model. You could save it to localStorage using model.save(), so it would work offline if you load from localStorage once it has been saved.
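In outline it looks like this (placeholder names and sizes, not the exact code from the course):

const tf = require('@tensorflow/tfjs');

// The cached MobileNet (feature-vector variant) stays frozen and just produces embeddings;
// only this small head gets trained on your own classes.
function buildHead(numClasses) {
  const head = tf.sequential();
  // 1280 is the MobileNetV2 feature size - adjust to the variant you actually load
  head.add(tf.layers.dense({ inputShape: [1280], units: 64, activation: 'relu' }));
  head.add(tf.layers.dense({ units: numClasses, activation: 'softmax' }));
  head.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy' });
  return head;
}

// After training the head on embeddings from baseModel.predict(imageBatch), persist it:
//   await head.save('localstorage://my-head');   // browser
//   await head.save(`file://${dir}`);            // Node.js with tfjs-node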

Hello Sir, how do I save the facemesh model to my computer's file system and later load it from this location? I get

Hello. As this is a premade JS class made by a team at Google, you would need to hack that code bundle to reference locally stored cached assets, e.g. the model.json / *.bin files for the model it loads behind the scenes. I would open your favourite code editor and check the Chrome DevTools Network tab while loading a minimal website that uses it, to see what third-party requests are made. Then grab those resources, find where they are requested in the code, and change them to point to something you control instead, such as localStorage.

If you are using TensorFlow.js models directly, then you can use the model.save() API to save models to localStorage / IndexedDB etc. using this API:
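For example (assuming model is a LayersModel you already have in memory):

// Save once, then reload with no network round trip
await model.save('localstorage://my-model');        // or 'indexeddb://my-model'
const restored = await tf.loadLayersModel('localstorage://my-model');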

Given that this is a JS class someone else has made, though, you need to hack the code at that level in this case, as it is that code that deals with loading the raw model.json files etc.

Hello,
It depends on your architecture.
In my case, I work with a Node.js application on one side, where the TensorFlow model runs, and a web interface on the other side for action inputs, which are sent to the Node.js app.
Is that your configuration?
Best regards.