Thanks, Jason, for the reply. Unfortunately, the conversion did not completely solve the problem. I installed tfjs-converter, loaded my model, and converted it to uint8 (and to uint16 as well), but it made no difference. Is it possible that this conversion does not affect every weight in the model? Is there another method I could try, or does the whole model need to be rebuilt?
I converted with this command (Windows, with `^` line continuations and the paths quoted because they contain spaces):

```
tensorflowjs_converter ^
  --input_format=tfjs_layers_model ^
  --metadata ^
  --output_format=tfjs_layers_model ^
  --quantize_uint8=* ^
  --weight_shard_size_bytes=4194304 ^
  "C:\Users\gyorg\OneDrive\Asztali gép\konvertalt-modell\model.json" ^
  "C:\Users\gyorg\OneDrive\Asztali gép\konvertalt-modell"
```
This is the content of the resulting model.json:

```json
{"format": "layers-model", "generatedBy": "keras v2.6.0", "convertedBy": "TensorFlow.js Converter v3.13.0", "modelTopology": {"keras_version": "2.6.0", "backend": "tensorflow", "model_config": {"class_name": "Sequential", "config": {"name": "sequential_4", "layers": [{"class_name": "InputLayer", "config": {"batch_input_shape": [null, 14739], "dtype": "float32", "sparse": false, "ragged": false, "name": "dense_Dense3_input"}}, {"class_name": "Dense", "config": {"name": "dense_Dense3", "trainable": true, "batch_input_shape": [null, 14739], "dtype": "float32", "units": 100, "activation": "relu", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1, "mode": "fan_in", "distribution": "truncated_normal", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "Dropout", "config": {"name": "dropout_Dropout2", "trainable": true, "dtype": "float32", "rate": 0.5, "noise_shape": null, "seed": null}}, {"class_name": "Dense", "config": {"name": "dense_Dense4", "trainable": true, "dtype": "float32", "units": 2, "activation": "softmax", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1, "mode": "fan_in", "distribution": "truncated_normal", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}]}}}, "weightsManifest": [{"paths": ["group1-shard1of1.bin"], "weights": [{"name": "dense_Dense3/kernel", "shape": [14739, 100], "dtype": "float32", "quantization": {"dtype": "uint8", "min": -0.01743759173972934, "scale": 0.00013623118546663546, "original_dtype": "float32"}}, {"name": "dense_Dense3/bias", "shape": [100], "dtype": "float32", "quantization": {"dtype": "uint8", "min": -0.0005331449365864198, "scale": 3.9201833572530865e-06, "original_dtype": "float32"}}, {"name": "dense_Dense4/kernel", "shape": [100, 2], "dtype": "float32", "quantization": {"dtype": "uint8", "min": -0.19357375818140365, "scale": 0.0015362996681063782, "original_dtype": "float32"}}]}]}
```
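For what it's worth, my understanding of those `quantization` entries (from reading the manifest, not from the converter docs, so take it as a sketch) is that each stored uint8 value `q` decodes back to roughly `min + scale * q`. Plugging in the numbers from `dense_Dense3/kernel` above:

```python
# Decoding a uint8-quantized weight back to float32, assuming tfjs's
# affine scheme w ≈ min + scale * q, with q in 0..255.
# min/scale are copied from the dense_Dense3/kernel entry above.
min_val = -0.01743759173972934
scale = 0.00013623118546663546

def dequantize(q: int) -> float:
    """Approximate float32 weight for the stored uint8 value q."""
    return min_val + scale * q

print(dequantize(0))    # smallest representable weight (= min)
print(dequantize(255))  # largest representable weight
```

So the whole kernel is squeezed into 256 evenly spaced values between roughly -0.0174 and +0.0173, which is why the conversion shrinks the .bin file but keeps the `dtype: float32` fields in the manifest.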
Error message after running the code:

```
Uncaught (in promise) Error: Weight StatefulPartitionedCall/kpt_offset_0/separable_conv2d_6/separable_conv2d/ReadVariableOp has unknown quantization dtype float16. Supported quantization dtypes are: 'uint8' and 'uint16'.
```
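As a sanity check, I can scan whichever model.json the page actually loads for quantization dtypes other than 'uint8'/'uint16' (the two the error says are supported). This is just a small helper I sketched myself, and the path in the comment is a placeholder:

```python
import json

def find_unsupported_quantization(manifest: dict) -> list:
    """Return (weight_name, dtype) for every weight whose quantization
    dtype is not one of the two the tfjs loader accepts."""
    supported = {"uint8", "uint16"}
    offenders = []
    for group in manifest.get("weightsManifest", []):
        for weight in group.get("weights", []):
            dtype = weight.get("quantization", {}).get("dtype")
            if dtype is not None and dtype not in supported:
                offenders.append((weight["name"], dtype))
    return offenders

# Real usage would load the file the page serves, e.g.:
# manifest = json.load(open(r"konvertalt-modell\model.json", encoding="utf-8"))
# Inline fragment here just to show the expected shape:
sample = {"weightsManifest": [{"paths": ["group1-shard1of1.bin"],
          "weights": [{"name": "dense_Dense3/kernel", "dtype": "float32",
                       "quantization": {"dtype": "uint8"}}]}]}
print(find_unsupported_quantization(sample))  # empty list means all dtypes are supported
```

An empty result for the converted model.json would suggest the error comes from some other model file being loaded, since a weight named like the one in the error message does not appear in the manifest above.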