28x28 image is still 28x28 after applying 7x7 kernel?

How is that possible?
The input is 60,000 images of shape (28, 28, 1).
I applied a Conv2D layer with 64 filters and a 7x7 kernel,
yet when I print the model summary, the first layer's output is still 28x28.

keras.layers.Conv2D(64, 7, activation="relu", padding="same", input_shape=[28, 28, 1]),
keras.layers.MaxPooling2D(2),

X_train_full.shape:  (60000, 28, 28, 1)
X_train_full.dtype:  uint8
X_test shape:  (10000, 28, 28)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
Model: "sequential"
Layer (type)                 Output Shape              Param #
conv2d (Conv2D)              (None, 28, 28, 64)        3200

This is because you have specified padding="same", which zero-pads the input so that (with the default stride of 1) the output has the same spatial size (height and width) as the input. With padding="valid" (the default), no padding is added and a 7x7 kernel would shrink each spatial dimension to 28 - 7 + 1 = 22.
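You can see the difference directly by running the same input through both padding modes (a minimal sketch using dummy data, not your actual training set):

```python
import numpy as np
import tensorflow as tf

# A single dummy 28x28 grayscale image, batch dimension included
x = np.zeros((1, 28, 28, 1), dtype="float32")

same = tf.keras.layers.Conv2D(64, 7, padding="same")
valid = tf.keras.layers.Conv2D(64, 7, padding="valid")

print(same(x).shape)   # (1, 28, 28, 64) -- zero-padded, spatial size preserved
print(valid(x).shape)  # (1, 22, 22, 64) -- 28 - 7 + 1 = 22 without padding
```

Note that "same" only guarantees matching height/width when the stride is 1; with stride s the output is ceil(28 / s).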

You can find additional details here: tf.keras.layers.Conv2D  |  TensorFlow Core v2.8.0
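The Param # column in your summary is also consistent with a 7x7 kernel: each of the 64 filters has 7 * 7 * 1 weights (one input channel) plus a bias term.

```python
# Conv2D parameter count: kernel_h * kernel_w * in_channels * filters + biases
params = 7 * 7 * 1 * 64 + 64
print(params)  # 3200, matching the model summary
```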
