Different number of model inputs for training and validation in an RNN state-space model

I'm working on a state-space model. For training it should take the two inputs ((None, 200, 5), (None, 200, 9)), and for validation just the single input (None, 200, 5). In the call function the cell then checks whether it is running in training or validation and accordingly uses one or two inputs.
Unfortunately, when I try it I get the error

Layer "rnn" expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor: shape=(32, 200, 5), dtype=float32, numpy=

So far I have tried passing it a tf.keras.Input as a placeholder for the second input, but then I got the error

TypeError: You are passing KerasTensor(type_spec=TensorSpec(shape=(), dtype=tf.float32, name=None), name='Placeholder:0', description="created by layer 'tf.cast_2'"), an intermediate Keras symbolic input/output, to a TF API that does not allow registering custom dispatchers, such as `tf.cond`, `tf.function`, gradient tapes, or `tf.map_fn`. Keras Functional model construction only supports TF API calls that *do* support dispatching, such as `tf.math.add` or `tf.reshape`. Other APIs cannot be called directly on symbolic Keras inputs/outputs. You can work around this limitation by putting the operation in a custom Keras layer `call` and calling that layer on this symbolic input/output.

Do you know how I can do this?

minimum working example:

import keras
import tensorflow as tf
from keras import layers
import numpy as np
import os

class cell(layers.Layer):

    def __init__(self):
        super().__init__()
        self.state_size = None

    def build(self, input_shape):
        inp_shape = input_shape[0]
        x_shape = input_shape[1]
        self.state_size = x_shape[-1]
        self.A = self.add_weight(name="A", shape=(self.state_size, self.state_size))
        self.B = self.add_weight(name="B", shape=(inp_shape[-1], self.state_size))

        self.built = True

    def call(self, input_at_t, states_at_t, training=True):
        if training:
            # training: the true state x is provided as the second input
            inp, x = input_at_t
            dx = tf.matmul(x, self.A) + tf.matmul(inp, self.B)
            dx = tf.clip_by_value(dx, -5, 5)
            return dx, [dx]
        else:
            # validation: only inp is given; the state is carried by the RNN
            x = states_at_t[0]
            inp = input_at_t
            dx = tf.matmul(x, self.A) + tf.matmul(inp, self.B)
            x_new = x + dx
            x_new = tf.clip_by_value(x_new, -5, 5)
            return x, [x_new]

class model(keras.Model):
    def __init__(self):
        super().__init__()
        self.rnn = layers.RNN(cell=cell(), return_sequences=True)
        #self.x_placeholder = tf.keras.Input(shape=(200, 9))

    def call(self, inputs, training=None, mask=None):
        if training:
            return self.rnn(inputs, training=training)
        else:
            return self.rnn(inputs[0], initial_state=inputs[1], training=training)

#data generation
inp = np.random.randn(1000, 201, 5)
x0 = np.random.randn(1000, 9)
x = []
dx = []

A = np.random.randn(9,9) * 0.001
B = np.random.randn(5, 9) * 0.001

x_last = x0
for i in range(201):
    dxi = np.matmul(x_last, A) + np.matmul(inp[:,i], B)
    dx.append(dxi[:, np.newaxis])
    x_last = x_last + dxi
    x.append(x_last[:, np.newaxis])

x = np.concatenate([x0[:, np.newaxis]] + x[:-1], axis=1)
dx = np.concatenate(dx, axis=1)
inp = inp[:,1:]
x_init = x[:,0]
x = x[:,1:]
dx = dx[:, 1:]

dataset_train = tf.data.Dataset.from_tensor_slices(((inp[:800], x[:800]), dx[:800])).batch(32)
dataset_val = tf.data.Dataset.from_tensor_slices(((inp[800:], x_init[800:]), x[800:])).batch(32)

model = model()
model.compile("adam", loss="mse", run_eagerly=False)
model.fit(dataset_train, epochs=5, validation_data=dataset_val)

For anybody interested in why I want to do this:

The state space is defined as

dx = A x + B inp
y = x

During training I want to fit the model only on the known differences dx, and during validation I want to evaluate the actual time series x.
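For illustration, the discrete recursion behind this model can be sketched in plain NumPy (using the same 9-state / 5-input dimensions as the example, with arbitrary random matrices, row-vector convention):

```python
import numpy as np

# Hypothetical small example: 9 states, 5 inputs, random system matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((9, 9)) * 0.001   # state transition matrix
B = rng.standard_normal((5, 9)) * 0.001   # input matrix

x = rng.standard_normal(9)                # current state x_t
u = rng.standard_normal(5)                # current input inp_t

dx = x @ A + u @ B                        # dx = A x + B inp
x_next = x + dx                           # Euler-style state update
print(x_next.shape)                       # (9,)
```

Training fits dx directly against known targets; validation rolls the x_next update forward from an initial state.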

Unrelated, but isn't it best practice, if you have already imported keras, to just do layers = keras.layers? I'm not a Python user, but I see this frequently here.

You mean instead of importing it with from keras import layers? I don't see the benefit of writing it out, to be honest.

So after a really long time I finally figured out a workaround.
Why do symbolic tensors not work? The model enters a different calling routine if you give it any symbolic tensor, and therefore also returns symbolic tensors, which can't be handled by the compute_loss function.
Why doesn't it work to just pass the number of inputs you want? Because the input specs are defined when the model is built, and they are checked before every call.
Why didn't it work to just overwrite those input specs before calling the RNN layer? Because, for some reason, if you set a layer's input_spec before calling it for the first time, the given input gets placed in the first entry of the input specs, making the list one entry longer than wanted. This also changes the original training input spec, since lists are mutable…
The only workaround I found so far: checking whether the RNN is built, and only changing the input spec to the desired one if it is… All in all, TensorFlow is a big mess!!! I'll switch to PyTorch as soon as possible.

Here is the solution:

class model(keras.Model):
    def __init__(self):
        super(model, self).__init__()

        self.rnn = layers.RNN(cell=cell(), return_sequences=True)

    def build(self, input_shape):
        def get_input_spec(shape):
            """Convert an input shape to an InputSpec."""
            if isinstance(shape, tf.TensorShape):
                input_spec_shape = shape.as_list()
            else:
                input_spec_shape = list(shape)
            batch_index, time_step_index = (1, 0) if self.rnn.time_major else (0, 1)
            if not self.stateful:
                input_spec_shape[batch_index] = None
            input_spec_shape[time_step_index] = None
            return layers.InputSpec(shape=tuple(input_spec_shape))

        self.train_input_spec = [get_input_spec(input_shape_i) for input_shape_i in input_shape]
        self.val_input_spec = [get_input_spec(input_shape[0])]

    def call(self, inputs, training=None, mask=None):
        if training:
            # only touch the input spec once the RNN has been built
            if self.rnn.built:
                self.rnn.input_spec = self.train_input_spec
            return self.rnn(inputs, training=training)
        else:
            inp, initial_state = inputs
            if self.rnn.built:
                self.rnn.input_spec = self.val_input_spec
            return self.rnn(inp, initial_state=initial_state, training=training)

I mean instead of

import keras
from keras import layers

which resolves the same module a second time, just do:

import keras
layers = keras.layers
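Both forms bind the same module object, so it's purely a style choice. Since keras may not be installed everywhere, the same point can be shown with a standard-library module:

```python
import os
from os import path as path_from_import  # form 1: a second import statement

path_from_attr = os.path                 # form 2: attribute access on the
                                         # already-imported parent module

# Both names refer to the exact same module object.
print(path_from_import is path_from_attr)  # True
```

(Python caches imported modules in sys.modules, so the second import is cheap either way.)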