Autoencoder: latent space has no inbound nodes

Hi, I have the following simple autoencoder:

class SimpleAE(tf.keras.Model):
    def __init__(self, latent_dim, bypass=False, trainable=True, **kwargs):
        super(SimpleAE, self).__init__(**kwargs)
        self.latent_dim = latent_dim
        self.bypass = bypass
        self.trainable = trainable
        self.quantizer = None
        self.built = False

    def get_config(self):
        config = super(SimpleAE, self).get_config().copy()
        config.update({'latent_dim': self.latent_dim, 'bypass': self.bypass, 'trainable': self.trainable})
        return config

    def build(self, input_shape):
        self.inputlayer = tf.keras.layers.InputLayer(input_shape=(input_shape[-1],))  # Initialize input layer
        self.encoder = tf.keras.layers.Dense(self.latent_dim, activation='linear', name="latentspace")
        self.decoder = tf.keras.layers.Dense(input_shape[-1], activation='linear')
        self.built = True

    def call(self, x):
        if not self.built:  # Ensure the model is built before calling
            self.build(x.shape)
        if self.bypass is False:
            xin = self.inputlayer(x)
            encoded = self.encoder(xin)
            decoded = self.decoder(encoded)
            return decoded
        return x

This autoencoder (actually four instances of it) is part of a larger model. I would like to get the output of the latent space so I can process it further in a second step. I want to do this with:

submodel = tf.keras.Model(inputs=[model.input], outputs=[model.get_layer("AE_Encoder_left").layers[1].output])

The model was pretrained by me in a separate step. Before this line I load and compile the model and check whether it evaluates correctly (i.e. I do model.evaluate(…)), which is the case. The weights of the SimpleAE are set correctly; I see all the necessary weights and biases. However, I always get the error

*** AttributeError: Layer latentspace has no inbound nodes.

when I try to get the output. I have tried many different things, but to no avail, and I actually believe this is a bug. Does anyone know another way to obtain the output of the latent space? I want to first train a separate model on that data and then plug the new model into the whole model after training. I could of course save the output of the latent space during execution inside the call method, but this is rather annoying, so I would prefer my current approach.
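For completeness, the workaround I mean would look roughly like this (a sketch only; the attribute name latest_latent is just illustrative, and I have stripped the bypass logic):

```python
import tensorflow as tf

class SimpleAE(tf.keras.Model):
    def __init__(self, latent_dim, **kwargs):
        super().__init__(**kwargs)
        self.encoder = tf.keras.layers.Dense(latent_dim, name="latentspace")
        self.decoder = None
        self.latest_latent = None  # refreshed on every forward pass

    def build(self, input_shape):
        # Decoder maps back to the input width
        self.decoder = tf.keras.layers.Dense(input_shape[-1])

    def call(self, x):
        encoded = self.encoder(x)
        self.latest_latent = encoded  # stash latent activations for later access
        return self.decoder(encoded)

ae = SimpleAE(latent_dim=8)
_ = ae(tf.random.normal((4, 32)))
print(ae.latest_latent.shape)  # (4, 8)
```

It works, but the stored tensor just sits in the model between calls, which is what I want to avoid.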
Does anyone have an idea what might cause this problem? I even ran model(SingleSample) before trying to access the output, to make sure the model was built, but I still received that error.

OK, I kept searching for a solution, and apparently it cannot be done this way; whether it's a bug or intended behavior was not clear from the comments I found. One workaround would be to store the output in the call function in some variable and access that, but I don't like this solution as it unnecessarily bloats the model. What I did instead was to split the autoencoder into two separate models, the encoder and the decoder, and to take the output of the encoder model, which is exactly the latent space. This worked as expected.
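For anyone landing here later, the split looks roughly like this (a sketch with made-up dimensions; the layer and model names just mirror the ones in my original model):

```python
import tensorflow as tf

input_dim, latent_dim = 32, 8  # illustrative sizes

# Encoder as its own functional model; its output IS the latent space
enc_in = tf.keras.Input(shape=(input_dim,))
latent = tf.keras.layers.Dense(latent_dim, activation='linear', name="latentspace")(enc_in)
encoder = tf.keras.Model(enc_in, latent, name="AE_Encoder_left")

# Decoder as a second model mapping latent codes back to the input width
dec_in = tf.keras.Input(shape=(latent_dim,))
recon = tf.keras.layers.Dense(input_dim, activation='linear')(dec_in)
decoder = tf.keras.Model(dec_in, recon, name="AE_Decoder_left")

# The full autoencoder just chains the two; no submodel surgery needed
ae_in = tf.keras.Input(shape=(input_dim,))
autoencoder = tf.keras.Model(ae_in, decoder(encoder(ae_in)), name="SimpleAE")

# Latent codes are now simply the encoder's predictions
codes = encoder(tf.random.normal((4, input_dim)))
print(codes.shape)  # (4, 8)
```

Because the encoder is a functional model with a real input, every layer has inbound nodes, so there is no need to dig into layer outputs after the fact.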