tf.concat raises an error and I don't know how to solve it. Any help?

Can anyone explain to me why sigma has shape 2 instead of one here? tf.concat always gives an error.

def normal_sp(params):
    return tfd.Normal(loc=params[:, 0:1], scale=1e-3 + tf.math.softplus(0.005 * params[:, 1:2]))

class subq0(tf.keras.Model):

    def __init__(self):
        super().__init__()
        self.mlp2 = tf.keras.Sequential()
        # (layers of mlp2 omitted from the post; the last Dense layer has one unit)
        self.distrlambda = tfp.layers.DistributionLambda(normal_sp, name='normal')

    def call(self, inputs):
        sigma = self.mlp2(inputs)
        params_mc = tf.concat([sigma, 1], axis=0)  # <- this line raises the error
        dist_mc = self.distrlambda(params_mc)
        return dist_mc

polynom = subq0()

optimizer = tf.optimizers.SGD(learning_rate=0.0001, momentum=0.9)

polynom.compile(optimizer=optimizer, loss=NLL)

tf.keras.utils.plot_model(polynom, "test.png", show_shapes=True)

history_polynom = polynom.fit(X_train, y_train, epochs=100, verbose=0, batch_size=100, validation_data=(X_val, y_val))

The error is:

ValueError: Shape must be rank 2 but is rank 1 for ‘{{node concat}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32](sequential_32/dense_74/BiasAdd, ones, concat/axis)’ with input shapes: [1,1], [1], [].
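I can reproduce the same rank mismatch in NumPy, which enforces the same rule that all inputs to a concatenation must have the same rank (a small sketch, not my actual code):

```python
import numpy as np

sigma = np.ones((1, 1))   # rank 2, like the [1, 1] output of mlp2
one = np.ones((1,))       # rank 1, like the ones tensor the literal 1 becomes

try:
    np.concatenate([sigma, one], axis=0)  # mixes rank 2 with rank 1
    failed = False
except ValueError:
    failed = True          # rejected, just like tf.concat
```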

Hi @P11, as Dense layers are fully connected, the input undergoes matrix multiplication with the weight matrix, resulting in a 2-dimensional (rank-2) output. For more details please refer to this document. Thank you.
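The shape arithmetic, sketched in NumPy (hypothetical weights; a Dense(1) layer computes x @ W + b):

```python
import numpy as np

x = np.ones((1, 20))   # one sample with 20 features (rank 2: batch x features)
W = np.ones((20, 1))   # kernel of a Dense(1) layer that follows 20 neurons
b = np.zeros((1,))     # bias of the single output neuron

out = x @ W + b        # matrix multiply keeps the batch dimension
print(out.shape)       # (1, 1) -> still rank 2, not rank 1
```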

But there is only one neuron in the last layer of mlp2, so shouldn't I get a 1-dimensional output at the end? The bias is summed to the dot product, no?

Hi @P11, the last layer has a weight matrix of shape (20, 1) because the previous layer has 20 neurons, each connected to the single neuron of the last layer, and that matrix is rank 2. When the matrix multiplication happens, the result is also rank 2: shape (batch, 1), not (batch,). For more details please refer to this gist. Thank you.
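Given that, one way to build a rank-2 params tensor is to concatenate sigma with a column of the same shape; a sketch in NumPy (in TensorFlow the analogous call would be tf.concat([sigma, tf.ones_like(sigma)], axis=1)):

```python
import numpy as np

sigma = np.ones((1, 1))  # rank-2 output of the last Dense(1) layer
params = np.concatenate([sigma, np.ones_like(sigma)], axis=1)  # both inputs rank 2
print(params.shape)      # (1, 2): params[:, 0:1] and params[:, 1:2] both exist
```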