Understanding tf.keras.layers.Dense()

I am trying to understand why there is a difference between calculating a dense layer operation directly and using the keras implementation.

Following the documentation (tf.keras.layers.Dense | TensorFlow), tf.keras.layers.Dense() should implement the operation output = activation(dot(input, kernel) + bias), but result and result1 below are not the same.
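As a sanity check, the documented formula does hold when the shapes are the ones Dense expects (this is a minimal sketch with illustrative values, not the code from the question):

```python
import numpy as np
import tensorflow as tf

# Dense stores its kernel with shape (input_dim, units) and computes
# activation(input @ kernel + bias), where input has shape (batch, input_dim).
x = tf.constant([[1.0, 2.0, 3.0]])               # shape (1, 3): batch of one
layer = tf.keras.layers.Dense(units=2,
                              activation="relu",
                              kernel_initializer="ones",
                              bias_initializer="zeros")
y = layer(x)

# Reproduce the documented operation by hand using the layer's own weights.
kernel, bias = layer.get_weights()               # kernel: (3, 2), bias: (2,)
manual = np.maximum(x.numpy() @ kernel + bias, 0.0)
print(np.allclose(y.numpy(), manual))            # → True
```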


import tensorflow as tf

b = tf.Variable(tf.random.uniform(shape=(5, 1)), dtype=tf.float32)
kernel = tf.Variable(tf.random.uniform(shape=(5, 10)), dtype=tf.float32)
x = tf.constant(tf.random.uniform(shape=(10, 1), dtype=tf.float32))

result = tf.nn.relu(tf.linalg.matmul(a=kernel, b=x) + b)

test = tf.keras.layers.Dense(units=5,
                             activation='relu',
                             use_bias=True,
                             kernel_initializer=tf.keras.initializers.Constant(value=kernel),
                             bias_initializer=tf.keras.initializers.Constant(value=b))

result1 = test(tf.transpose(x))




For example, result1 evaluates to:

[[2.38769 3.63470697 2.62423944 3.31286287 2.91121125]]

Using test.get_weights() I can see that the kernel and bias (b) are being set to the expected values. I am using TF version 2.12.0.
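One likely culprit is the kernel layout: Dense builds its kernel with shape (input_dim, units) = (10, 5) and computes input @ kernel, so a (5, 10) value handed to Constant appears to be reshaped, not transposed, into that slot. Here is a sketch of how to make the two computations agree under that assumption (the .numpy() conversions and the tolerance are my additions):

```python
import numpy as np
import tensorflow as tf

b = tf.Variable(tf.random.uniform(shape=(5, 1)), dtype=tf.float32)
kernel = tf.Variable(tf.random.uniform(shape=(5, 10)), dtype=tf.float32)
x = tf.constant(tf.random.uniform(shape=(10, 1), dtype=tf.float32))

# Hand-rolled version: relu(K @ x + b), with K of shape (5, 10).
result = tf.nn.relu(tf.linalg.matmul(a=kernel, b=x) + b)

# Dense computes input @ kernel with kernel shape (input_dim, units) = (10, 5),
# so initialize it with the TRANSPOSE of K and a flat (5,) bias.
test = tf.keras.layers.Dense(
    units=5,
    activation='relu',
    use_bias=True,
    kernel_initializer=tf.keras.initializers.Constant(tf.transpose(kernel).numpy()),
    bias_initializer=tf.keras.initializers.Constant(tf.reshape(b, (5,)).numpy()),
)
result1 = test(tf.transpose(x))   # input (1, 10) -> output (1, 5)

# relu(K @ x + b) transposed equals relu(x^T @ K^T + b^T).
print(np.allclose(result.numpy().T, result1.numpy(), atol=1e-5))   # → True
```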


I am not an expert, but my thought is that more is going on inside the layer, and there could be some kind of optimization for better precision in the float calculations.