TF2.3. sparse input name missing in saved model

After a variable-length sparse feature is introduced, the feature names in the exported PB model are lost, and the input parameter names become args_0, args_1, …

Related Q&A:

[1] tensorflow - tensorflow2 sparse input name missing in saved model - Stack Overflow

[2] https://github.com/tensorflow/tensorflow/issues/42018

As shown above, when the feature “tags” is a FixedLenFeature, the structured_input_signature of the exported PB model retains all input feature names, and online TF-Serving works normally.

However, if the feature “tags” is a variable-length sparse VarLenFeature, the feature names in the structured_input_signature of the exported PB model are all lost and become args_0, args_1, …
Then online, when TF-Serving constructs an Example with 14 features, the service reports an error:

Here, 14 comes from the offline model definition: 13 fixed-length features + 1 sparse feature.

When the final model is exported, this one sparse feature is decomposed into three parameters

(“tags”, as a sparse tensor, is split into three components when stored in PB format: indices, values, and dense_shape).

Therefore, the model input in PB format is 13 + 3.
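To make the 13 + 3 count concrete, here is a minimal sketch with toy data (not the actual model) of how a single sparse “tags” feature decomposes into the three dense components that show up in the signature:

```python
import tensorflow as tf

# Toy "tags" batch: 2 examples with a variable number of string tags.
tags = tf.SparseTensor(
    indices=[[0, 0], [0, 1], [1, 0]],  # (row, position) of each tag
    values=["a", "b", "c"],
    dense_shape=[2, 2],
)

# In the SavedModel signature this single sparse feature surfaces as
# three dense inputs, which is why 13 + 1 features become 13 + 3 inputs.
print(tags.indices.shape)      # (3, 2)
print(tags.values.shape)       # (3,)
print(tags.dense_shape.shape)  # (2,)
```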

Below is the code used to save the model; both paths hit the same error.

Path 1

save_path = os.path.join(ckpt_dir, './model')

model.save(save_path, save_format='tf')

Path 2

save_path2 = os.path.join(ckpt_dir, './model2')

tf.saved_model.save(model, save_path2)

Similarly, when defining

"tags": tf.io.RaggedFeature(tf.string, row_splits_dtype=tf.int64)

a similar error occurs:

input size does not match signature: 14!=15
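The 15 in the error can be checked the same way with toy data: a ragged tensor is stored as two components (values and row_splits), so 13 fixed-length features + 1 ragged feature yields 13 + 2 = 15 signature inputs. A minimal sketch:

```python
import tensorflow as tf

# Toy ragged "tags" batch: 2 examples, with 2 tags and then 1 tag.
tags = tf.ragged.constant([["a", "b"], ["c"]])

# A ragged tensor decomposes into exactly two components,
# which is where the 15 (= 13 + 2) in the error comes from.
print(tags.values)      # the flat tag strings: [b'a' b'b' b'c']
print(tags.row_splits)  # where each example starts/ends: [0 2 3]
```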

It seems to be a problem with the input function signature.

Do I need to rewrite the signature parameter?

Please explain the specific usage.

Thank you very much~

@Lee_Bruce,

As mentioned here, this cannot be fixed for now; please use one of the workarounds provided:

To treat TF APIs as if they are Keras layers during model construction, we rely on the existing internal TF dispatching mechanism for CompositeTensors (e.g. RaggedTensors, SparseTensors) to allow for custom behaviors when built-in TF APIs see unknown CompositeTensor types.

Because __new__ doesn’t call __init__ but the failure triggers in __init__, we can’t trigger the current fallback-based dispatching approach on SparseTensor instance construction. And, the fact that tf.SparseTensor is a class means we can’t change the api to just be a method for backwards compatibility reasons.

So, this isn’t fixable (without bad hacks) until the proactive type-checking dispatching described in the Extension Types RFC is implemented: RFC: TensorFlow Extension Types by edloper · Pull Request #269 · tensorflow/community · GitHub

You should be able to work around this for now by:

  1. Putting the tf.SparseTensor creation inside of a Keras Lambda layer
  2. Putting the tf.SparseTensor creation inside of a custom layer
  3. If you’re okay with using the nightlies, you can pass a type_spec that matches your needs directly to tf.keras.Input creation. This should be available in TF 2.5
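A minimal sketch along the lines of workaround 2 (the layer and feature names here are illustrative, not from the thread): the sparse-specific op lives inside a custom layer, and the sparse input is declared under its real feature name with `tf.keras.Input(..., sparse=True)`. This is not the `type_spec` route from item 3, just the custom-layer workaround with a named sparse input:

```python
import tensorflow as tf

# Workaround 2 sketch: keep the sparse-specific work inside a custom
# layer so built-in Keras machinery never constructs a tf.SparseTensor.
class DensifyTags(tf.keras.layers.Layer):
    def call(self, sparse_tags):
        return tf.sparse.to_dense(sparse_tags, default_value="")

# Declare the sparse input under its real feature name.
tags = tf.keras.Input(shape=(None,), dtype=tf.string, sparse=True, name="tags")
model = tf.keras.Model(inputs=tags, outputs=DensifyTags()(tags))

# The model accepts a SparseTensor and produces a dense string batch.
st = tf.SparseTensor(indices=[[0, 0], [0, 2]], values=["a", "b"],
                     dense_shape=[1, 3])
print(model(st).shape)  # (1, 3)
```

With the work wrapped this way, the exported signature is driven by the named Keras input rather than by anonymous SparseTensor construction arguments.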

Thank you!