tf.unstack: Cannot infer argument `num` from shape (None, None, 8)

Hi, I am putting together a model that consists of a base model plus an additional submodel I am adding (see also my other threads). The base model uses its call method solely to build itself:

def call(self):
    input_left  = tf.keras.Input(shape=(None,), name="Input_left")
    input_right = tf.keras.Input(shape=(None,), name="Input_right")

    enc_inp_l = self.encoder_left(input_left)
    enc_inp_r = self.encoder_right(input_right)

    # AE Encoder left side -> right side
    enc_inp_l_quantized = self.ssrae_enc_left(enc_inp_l)
    # AE Encoder right side -> left side
    enc_inp_r_quantized = self.ssrae_enc_right(enc_inp_r)

    ...

    model = tf.keras.Model(inputs=[input_left, input_right],
                           outputs=[out_left, out_right],
                           name=self.model_name)

    ...

    return model

So the model is constructed with variable input sizes. The error is raised by my new submodel, which appears above as, for example, self.ssrae_enc_left. The cause is the call method of the recurrent autoencoder I described in other threads:

def call(self, x, return_quantized=False):
    if not self.bypass:
        # initial recurrent state of shape (batch, features * ht)
        state = tf.zeros(shape=(tf.shape(x)[0], tf.shape(x)[2] * self.ht))

        # split the sequence along the time axis
        x_unstacked = tf.unstack(x, axis=1)

        output_list = [None] * len(x_unstacked)
        for i, inputs in enumerate(x_unstacked):
            encoded = self.encoder(inputs, state)
            encoded_q = self.quantizer(encoded)
            decoded, state = self.decoder(encoded_q, state)
            output_list[i] = decoded

        outputs = tf.stack(output_list, axis=1)

        if return_quantized:
            return outputs, encoded_q
        else:
            return outputs

    else:
        return x

Precisely, the line

x_unstacked = tf.unstack(x, axis=1)

raises the error

        x_unstacked = tf.unstack(x, axis=1)

    ValueError: Cannot infer argument `num` from shape (None, None, 8)


Call arguments received by layer 'frae' (type FRAE):
  • x=tf.Tensor(shape=(None, None, 8), dtype=float32)
  • return_quantized=False

Call arguments received by layer 'ssrae' (type SSRAE):
  • x=tf.Tensor(shape=(None, None, 64), dtype=float32)

So I understand the issue to some extent: because the BaseModel is built with inputs of unknown length (its call() takes no input tensor), the second (time) dimension is unknown and cannot be inferred inside my recurrent model.

The BaseModel is constructed using something like

Model(args).call()

How can I make this work with my recurrent autoencoder? I could set `num=X` in tf.unstack, but that feels hacky, and the model then appears to take very long to build (X is fixed, but rather large).
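For reference, the error can be reproduced in isolation with nothing but a symbolic tensor whose second axis is unknown; a minimal sketch (the function name is just illustrative):

import tensorflow as tf

# Only the static shape matters here: axis 1 has no known length,
# so tf.unstack cannot infer `num`.
@tf.function(input_signature=[tf.TensorSpec(shape=(None, None, 8), dtype=tf.float32)])
def unstack_time_axis(x):
    return tf.unstack(x, axis=1)

unstack_time_axis.get_concrete_function()
# ValueError: Cannot infer argument `num` from shape (None, None, 8)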

Hi @Cola_Lightyear, the error occurs because data with an unknown dimension is passed to tf.unstack. tf.unstack requires the size of the input along the unstacked axis to be statically known (or `num` to be supplied). Thank you.
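One way to keep the time axis dynamic is to drop tf.unstack entirely and run the per-timestep loop with a tf.TensorArray over tf.range, which AutoGraph lowers to a tf.while_loop instead of unrolling X copies of the layers (the unrolling is likely why the `num=X` workaround is so slow to build). A minimal sketch, assuming encoder, quantizer, decoder and ht behave as in the call() above; the return_quantized branch is omitted, since the quantized tensor would have to be carried as an extra loop variable:

import tensorflow as tf

def call(self, x, return_quantized=False):
    if self.bypass:
        return x

    batch = tf.shape(x)[0]
    steps = tf.shape(x)[1]      # dynamic sequence length, may be None statically
    feats = tf.shape(x)[2]

    state = tf.zeros(shape=(batch, feats * self.ht))
    outputs_ta = tf.TensorArray(dtype=x.dtype, size=steps)

    # Note: the sublayers must already be built (e.g. in build()),
    # because variables cannot be created inside the loop body.
    for i in tf.range(steps):   # converted to tf.while_loop by AutoGraph
        inputs = x[:, i, :]     # slice one timestep instead of unstacking
        encoded = self.encoder(inputs, state)
        encoded_q = self.quantizer(encoded)
        decoded, state = self.decoder(encoded_q, state)
        outputs_ta = outputs_ta.write(i, decoded)

    # TensorArray.stack() puts time on axis 0; move it back to axis 1
    return tf.transpose(outputs_ta.stack(), perm=[1, 0, 2])

An alternative with the same effect is to wrap the per-timestep encoder/quantizer/decoder step in a custom cell and let tf.keras.layers.RNN drive the loop over the unknown-length time axis.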