BERT NeuSpell Tokenizer graph used as a SavedModelBundle issue

I have been consistently running the BERT NeuSpell Tokenizer graph as a SavedModelBundle using TensorFlow core platform 0.4.1 in a Scala app. For some bizarre reason, in the last day or so, without making any change to the code that generates the tensor output, I keep getting the error below:
org.tensorflow.exceptions.TFInvalidArgumentException: ConcatOp : Ranks of all input tensors should match: shape[0] = [1,1] vs. shape[1] = [3]
[[{{function_node __inference_serve_549812}}{{node RaggedConcat/concat}}]]

Interestingly, this looks like one of the internal nodes; I am just passing a single input as a TString to the input function 'serving_default_text' and then fetching the outputs from 'StatefulPartitionedCall_2'. Here is the signature def dump of this graph:

{serving_default=inputs {
  key: "text"
  value {
    name: "serving_default_text:0"
    dtype: DT_STRING
    tensor_shape {
      unknown_rank: true
    }
  }
}
outputs {
  key: "output_0"
  value {
    name: "StatefulPartitionedCall_2:0"
    dtype: DT_INT64
    tensor_shape {
      dim {
        size: 1
      }
      dim {
        size: -1
      }
    }
  }
}
outputs {
  key: "output_1"
  value {
    name: "StatefulPartitionedCall_2:1"
    dtype: DT_INT64
    tensor_shape {
      dim {
        size: 1
      }
      dim {
        size: -1
      }
    }
  }
}
outputs {
  key: "output_2"
  value {
    name: "StatefulPartitionedCall_2:2"
    dtype: DT_INT64
    tensor_shape {
      dim {
        size: -1
      }
      dim {
        size: -1
      }
    }
  }
}
method_name: "tensorflow/serving/predict"
, __saved_model_init_op=outputs {
  key: "__saved_model_init_op"
  value {
    name: "NoOp"
    tensor_shape {
      unknown_rank: true
    }
  }
}
}

I load the graph from resource files laid out in the standard SavedModel pattern: a saved_model.pb file plus a variables folder.
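For reference, the loading itself is the standard call (a minimal sketch; the path and the "serve" tag here are assumptions, not my exact code):

import org.tensorflow.SavedModelBundle

// Hypothetical export directory containing saved_model.pb and variables/
val savedModelBundle: SavedModelBundle =
  SavedModelBundle.load("/path/to/neuspell_tokenizer", "serve")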
I can step through the debugger and see that the graph is loaded, and I can also print out all the signature defs, but when I use the syntax below, I keep getting the error above from some intermediate node operation.
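Dumping the signatures looks roughly like this (a sketch; my actual logging code differs):

import scala.jdk.CollectionConverters._

// Prints every SignatureDef key with its proto, i.e. the dump shown above
savedModelBundle.metaGraphDef().getSignatureDefMap().asScala.foreach {
  case (key, sig) => println(s"$key=$sig")
}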

The invocation syntax uses the SavedModelBundle instance:

a) create a session runner:
val tokenizerSession = savedModelBundle.session().runner()

b) then invoke .feed once and .fetch multiple times to pull all the outputs:
val testQuery = "psycologist" // misspelled on purpose
val input = TString.tensorOfBytes(NdArrays.vectorOfObjects(testQuery.getBytes(StandardCharsets.UTF_8)))

val tensors = tokenizerSession
  .feed("serving_default_text", input)
  .fetch("StatefulPartitionedCall_2", 0)
  .fetch("StatefulPartitionedCall_2", 1)
  .fetch("StatefulPartitionedCall_2", 2)
  .run()
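For context, here is roughly how the fetched outputs are consumed when the run succeeds (a sketch; the variable names and casts are illustrative, not my exact code):

import org.tensorflow.Tensor
import org.tensorflow.types.TInt64
import scala.jdk.CollectionConverters._

// In 0.4.x, run() returns a java.util.List[Tensor], one entry per fetch, in fetch order
val results = tensors.asScala.toList
val output0 = results(0).asInstanceOf[TInt64] // signature output_0, shape [1, -1]
val output1 = results(1).asInstanceOf[TInt64] // signature output_1, shape [1, -1]
val output2 = results(2).asInstanceOf[TInt64] // signature output_2, shape [-1, -1]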
This call generates the stack trace below:

Exception in thread "zio-fiber-65" org.tensorflow.exceptions.TFInvalidArgumentException: ConcatOp : Ranks of all input tensors should match: shape[0] = [1,1] vs. shape[1] = [3]
[[{{function_node __inference_serve_549812}}{{node RaggedConcat/concat}}]]
at org.tensorflow.internal.c_api.AbstractTF_Status.throwExceptionIfNotOK(AbstractTF_Status.java:87)
at org.tensorflow.Session.run(Session.java:850)
at org.tensorflow.Session.access$300(Session.java:82)
at org.tensorflow.Session$Runner.runHelper(Session.java:552)
at org.tensorflow.Session$Runner.runNoInit(Session.java:499)
at org.tensorflow.Session$Runner.run(Session.java:495)

Another really puzzling thing: by definition, NdArrays.vectorOfObjects creates an NdArray of rank 1 (a vector), yet the error reports the shape of the first input, shape[0], as [1,1], even though the failing node '__inference_serve_549812' is an internal one. Even more bizarre, this was working before; I only started seeing the issue recently. One possible culprit: the variables file 'variables.data-00000-of-00001' showed a Word file association on my Mac (I may have accidentally clicked it at some point). I changed the association back to Finder so it no longer tries to open the file with Word, and I also tried copying the file into the project again. Technically this should not be an issue: if the file could not be read, I would expect different errors.
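For what it's worth, the rank of the input can be sanity-checked before feeding it, and TString also has scalar/vector factory methods (a sketch; whether the graph wants rank 0 or rank 1 here is an assumption I have not verified):

import org.tensorflow.types.TString

// Rank-1 string tensor, shape [1] - same rank as the tensorOfBytes input above
val vectorInput = TString.vectorOf(testQuery)
println(vectorInput.shape()) // [1]

// Rank-0 scalar string, shape [] - in case the signature expects a bare string
val scalarInput = TString.scalarOf(testQuery)
println(scalarInput.shape()) // []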

Also, when I step through the debugger I can see the raw tensor at the runHelper stage, and it appears to have the right shape: a DT_STRING tensor of shape [1]. I am not sure how it turns into [1,1].

Update: I figured out the problem. Our builds get dependencies upgraded to the latest versions automatically (a PR is auto-generated, and we probably missed it). The tensorflow-core-platform version went from 0.4.0 to 0.4.1, and that broke the tokenizer graph operations through SavedModelBundle.
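The fix was to pin the dependency back to the known-good version; in sbt terms (a sketch, assuming an sbt build):

// build.sbt - pin tensorflow-core-platform instead of taking auto-upgrades
libraryDependencies += "org.tensorflow" % "tensorflow-core-platform" % "0.4.0"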