Layer has no inbound nodes

See the edit below!

I am trying to construct a submodel from a larger model. However, for some reason, despite calling the model prior to creating the submodel, I receive the error “Layer has no inbound nodes” when I try to get the output of one of the layers of my larger model.
What I do is:

import tensorflow as tf
import numpy as np

class VectorQuantization(tf.keras.layers.Layer):
    def __init__(self, codebook = None, **kwargs):
        super(VectorQuantization, self).__init__(**kwargs)
        self.codebook = codebook

    def SetCodebook(self, codebook):
        self.codebook = codebook

    @tf.custom_gradient
    def call(self, inputs):
        def grad(dy):
            # Straight-through estimator: pass the upstream gradient through unchanged
            return dy

        if self.codebook is not None:  # truth-testing an array directly is ambiguous
            # Flatten input tensor; the embedding dimension is the last input dimension
            input_shape = tf.shape(inputs)
            flat_inputs = tf.reshape(inputs, [-1, tf.shape(inputs)[-1]])
            # Reshape codebook for distance calculation
            reshaped_codebook = np.expand_dims(self.codebook, axis=0)
            # Calculate distances between inputs and codebook vectors
            distances = tf.reduce_sum(tf.square(tf.expand_dims(flat_inputs, axis=1) - reshaped_codebook), axis=2)

            # Find the index of the closest centroid for each input
            embedding_indices = tf.argmin(distances, axis=1)

            # Gather closest embeddings from codebook
            quantized = tf.gather(tf.convert_to_tensor(self.codebook), embedding_indices)

            # Reshape quantized tensor to match input shape
            quantized = tf.reshape(quantized, input_shape)


            return quantized, grad
        else:
            return inputs, grad


class FRAEEncoder(tf.keras.layers.Layer):
    def __init__(self, input_shape, latent_dim,   layer_config, **kwargs):
        super(FRAEEncoder, self).__init__(**kwargs)
        self.SetupLayers(input_shape, latent_dim, layer_config)

    def SetupLayers(self, input_shape, latent_dim, layer_config):
        self.encoder = []
        # Copy the lists so the appends below do not mutate the caller's layer_config
        activations  = list(layer_config["activations"])
        num_neuron   = list(layer_config["neurons"])
        activations.append("swish")
        num_neuron.append(latent_dim)
        for i, act in enumerate(activations):
            if i == 0:
                self.encoder.append(tf.keras.layers.Dense(num_neuron[i], activation=act, input_shape=(input_shape,)))
            else:
                self.encoder.append(tf.keras.layers.Dense(num_neuron[i], activation=act))


    def call(self, inputs, state):
        # encoder
        encoded = tf.concat([inputs, state], axis=-1)

        for lrs in self.encoder:
            encoded = lrs(encoded)

        new_output = encoded
        return new_output

class FRAEDecoder(tf.keras.layers.Layer):
    def __init__(self,   output_dim, layer_config, **kwargs):
        super(FRAEDecoder, self).__init__(**kwargs)
        self.SetupLayers( output_dim, layer_config)

    def SetupLayers(self,  output_dim, layer_config):
        self.decoder = []
        activations  = layer_config["activations"]
        num_neuron   = layer_config["neurons"]

        for i, act in reversed(list(enumerate(activations))):
            self.decoder.append(tf.keras.layers.Dense(num_neuron[i],activation=act))

        self.decoder.append(tf.keras.layers.Dense(output_dim, activation='linear'))


    def call(self, encoded, state):
        y = tf.concat([encoded, state], axis=1)
        for lrs in self.decoder:
            y = lrs(y)

        #update output and state
        new_output = y  # Result of some operations on `combined`
        new_state  = tf.concat([new_output, state[:, :-tf.shape(new_output)[-1]]], axis=-1)
        return new_output, new_state




class FRAE(tf.keras.Model):
    def __init__(self, output_dim, latent_dim, ht, layer_config=None, **kwargs):
        super(FRAE, self).__init__(**kwargs)
        if layer_config is None:
            # Avoid a mutable default argument, which would be shared across instances
            layer_config = {"activations": [], "neurons": []}
        self.output_dim = output_dim
        self.ht = ht
        self.encoder = FRAEEncoder(output_dim, latent_dim, layer_config, name=self.name+"_Encoder")
        self.decoder = FRAEDecoder(output_dim, layer_config, name=self.name+"_Decoder")
        self.quantizer = VectorQuantization(name=self.name+"_VQ")

    def SetQuantizer(self, codebook):
        self.quantizer.SetCodebook(codebook)

    def call(self, x):
        state = tf.zeros(shape=(tf.shape(x)[0], tf.shape(x)[2] * self.ht))

        x_unstacked = tf.unstack(x, axis=1)

        output_list = [None] * len(x_unstacked)
        for i, inputs in enumerate(x_unstacked):
            encoded = self.encoder(inputs, state)
            encoded_q = self.quantizer(encoded)
            decoded, state = self.decoder(encoded_q, state)
            output_list[i] = decoded

        outputs = tf.stack(output_list, axis=1)
        return outputs


if __name__ == '__main__':
    output_dim = 8
    latent_dim = 2
    data = np.random.rand(20, 100, output_dim)


    ht = 1
    frae = FRAE(output_dim, latent_dim, ht)

    loss = tf.keras.losses.mse
    frae.build(input_shape=data.shape)

    frae.compile(loss=loss, run_eagerly=False)


    y = frae(data)
    encoder_output = frae.encoder.output
    submodel = tf.keras.Model(inputs=frae.input, outputs=encoder_output)

However, despite explicitly using the input_shape parameter in the FRAEEncoder class, I receive the error message

s = frae.encoder.output
raise AttributeError(
AttributeError: Layer frae_Encoder has no inbound nodes.

How do I solve this? As far as I know, the connections should be set after calling the model on some data.

edit: What I actually want to do is store the outputs of the VectorQuantization layer whenever I feed data to the input of the FRAE model. What is the best way to store these outputs? I previously appended them to plain Python lists inside the call method of the VectorQuantization layer, but that adds some annoying overhead.

[Google DeepMind Assist]

The error you’re encountering comes from accessing the output property of a layer that has never been connected in a functional-API manner. In your code, frae.encoder is an instance of FRAEEncoder, which is a subclass of tf.keras.layers.Layer, not of tf.keras.Model, and it is only ever called on concrete tensors inside FRAE.call. It therefore never records the inbound nodes that back the output property you would expect from a model built with the functional API.
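The difference is easy to reproduce in isolation. A layer only gets inbound nodes, and with them the output attribute, when it is called on a symbolic tf.keras.Input; calling it on concrete tensors records nothing. Here is a minimal standalone sketch (unrelated to your model) showing both cases:

import tensorflow as tf

# Called on a symbolic Input: an inbound node is recorded.
inp = tf.keras.Input(shape=(8,))
dense = tf.keras.layers.Dense(4)
out = dense(inp)
print(dense.output)  # works: a symbolic tensor

# Called only on concrete data: no node is recorded.
dense2 = tf.keras.layers.Dense(4)
_ = dense2(tf.zeros((1, 8)))
# dense2.output  ->  AttributeError: Layer ... has no inbound nodes.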

To capture the output of the VectorQuantization layer, you can modify your FRAE model to return not only the final output but also the intermediate output from the VectorQuantization layer. Here’s how you can do it:

  1. Modify the call method of the FRAE model to return both the final output and the quantized output.
class FRAE(tf.keras.Model):
    # ... (other parts of the class remain unchanged)

    def call(self, x, return_quantized=False):
        # ... (the rest of the call method remains unchanged)

        if return_quantized:
            return outputs, encoded_q
        else:
            return outputs
  2. When you want to get the quantized output, call the model with the return_quantized flag set to True.
if __name__ == '__main__':
    # ... (other parts of the code remain unchanged)

    y, quantized_output = frae(data, return_quantized=True)

This way, you can get both the final output and the intermediate quantized output without having to create a submodel. The quantized_output will contain the output of the VectorQuantization layer for each time step.
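For completeness, here is one way the full call method might look if you collect the quantized vector of every time step; this is a sketch that mirrors the loop from your original code, with only the list handling and the return changed:

class FRAE(tf.keras.Model):
    # ... (other parts of the class remain unchanged)

    def call(self, x, return_quantized=False):
        state = tf.zeros(shape=(tf.shape(x)[0], tf.shape(x)[2] * self.ht))
        x_unstacked = tf.unstack(x, axis=1)

        output_list    = []
        quantized_list = []
        for inputs in x_unstacked:
            encoded = self.encoder(inputs, state)
            encoded_q = self.quantizer(encoded)
            decoded, state = self.decoder(encoded_q, state)
            output_list.append(decoded)
            quantized_list.append(encoded_q)

        outputs = tf.stack(output_list, axis=1)
        if return_quantized:
            # One quantized vector per time step, stacked along the time axis
            return outputs, tf.stack(quantized_list, axis=1)
        return outputs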

If you want to store the quantized output for each batch during training, you can modify your training loop to capture this output and store it in a list or any other data structure of your choice.

Remember that if you’re using the fit method for training, you won’t be able to capture intermediate outputs directly this way. In that case, you need a custom training loop or a callback to access the intermediate layer outputs.
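For example, a minimal custom training loop that stores the quantized outputs per batch could look like this (illustrative only; it assumes the return_quantized flag from above and reuses data and frae from your script):

optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.MeanSquaredError()
quantized_history = []  # grows with every step; watch memory on long runs

# Autoencoder setup: the input is also the target
data32 = data.astype("float32")
dataset = tf.data.Dataset.from_tensor_slices((data32, data32)).batch(4)

for x_batch, y_batch in dataset:
    with tf.GradientTape() as tape:
        outputs, quantized = frae(x_batch, return_quantized=True)
        loss_value = loss_fn(y_batch, outputs)
    grads = tape.gradient(loss_value, frae.trainable_variables)
    optimizer.apply_gradients(zip(grads, frae.trainable_variables))
    quantized_history.append(quantized)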

Great, thank you so much (again :D)! Although I do not quite understand why I am returned the entire history of the quantizer outputs. I still look at the code with C++ eyes, and to me it should only return the most recent quantizer output, but it returns everything (which is great).

edit:
One quick additional question: Your solution works really fine, but I might require using a batch size during inference, hence I might eventually have to use predict(). The reason is, that my FRAE will be integrated in a larger model (see the other thread), which might require using smaller batches to not run out of memory. I do not see an option to pass additional parameters, so am I correct in thinking I cannot return the quantizer outputs this way when using predict()?