Is it possible to print the dataset after each layer?

I have the following model defined (below) and I am using fit to train it. During training, I would like to print out the dataset, or at least its shape/dimensions after each layer, for troubleshooting purposes. Is that possible?

model = keras.models.Sequential([
    keras.layers.Conv2D(64, 7, activation="relu", padding="same", input_shape=[28, 28, 1]),
    keras.layers.MaxPooling2D(2),
    keras.layers.Conv2D(128, 3, activation="relu", padding="same"),
    keras.layers.Conv2D(128, 3, activation="relu", padding="same"),
    keras.layers.MaxPooling2D(2),
    keras.layers.Conv2D(256, 3, activation="relu", padding="same"),
    keras.layers.Conv2D(256, 3, activation="relu", padding="same"),
    keras.layers.MaxPooling2D(2),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(10, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
history=model.fit(X_train, y_train, epochs=CONFIG_EPOCHS, batch_size=CONFIG_BATCH_SIZE, validation_data=(X_valid, y_valid))

I am doing exactly the same thing in PyTorch, where it is straightforward to print X between layers:

from torch.nn import Module, Conv2d, ReLU, MaxPool2d

class MLP(Module):

    # define model elements
    def __init__(self):
        super(MLP, self).__init__()
        self.conv1 = Conv2d(1, 64, 7, padding="same")
        self.act1 = ReLU()
        self.maxpool1 = MaxPool2d(2)
        self.conv2a = Conv2d(64, 128, 3, padding="same")
        self.act2a = ReLU()
        self.conv2b = Conv2d(128, 128, 3, padding="same")
        self.act2b = ReLU()

    def forward(self, X):
        if DEBUG:
            print("forward entered: X: ", X.size())

        printdbg("X: " + str(X.size()))

        X = self.conv1(X)
        X = self.act1(X)
        printdbg("X, conv1/act1: " + str(X.size()))
        X = self.maxpool1(X)
        printdbg("X, maxpool1: " + str(X.size()))

        X = self.conv2a(X)
        X = self.act2a(X)
        printdbg("X, conv2a/act2a: " + str(X.size()))
        X = self.conv2b(X)
        X = self.act2b(X)
        return X

def printdbg(msg):
    if DEBUG_PRT:
        print(msg)
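For what it's worth, PyTorch can also do this printing without editing forward() at all, via forward hooks. A minimal sketch (only the first block of the model above, written as a Sequential; `report` is a made-up helper name):

```python
import torch
from torch import nn

# Same first block as the MLP above, as a Sequential for brevity.
model = nn.Sequential(
    nn.Conv2d(1, 64, 7, padding="same"),
    nn.ReLU(),
    nn.MaxPool2d(2),
)

# A forward hook fires after each module's forward pass and receives its
# output, so the shape can be printed without touching the model code.
def report(name):
    def hook(module, inputs, output):
        print(name, tuple(output.size()))
    return hook

for name, module in model.named_children():
    module.register_forward_hook(report(name))

y = model(torch.zeros(1, 1, 28, 28))  # prints one shape line per layer
```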

You can visualize the whole graph with the shapes of each layer:
tf.keras.utils.plot_model(model, show_shapes=True, show_dtype=True)

Thanks, that could be a possibility, but I really want to print the dataset dimensions just like in PyTorch, if that is possible at all?
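One way to get PyTorch-style printing in Keras is a pass-through layer that prints with tf.print, which (unlike Python's print) also runs inside compiled graphs during fit(). A minimal sketch, assuming TF 2.x with the TensorFlow backend; `ShapeProbe` is a made-up name, not a Keras API:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

class ShapeProbe(keras.layers.Layer):
    """Identity layer that prints the runtime shape of whatever passes through."""
    def __init__(self, label, **kwargs):
        super().__init__(**kwargs)
        self.label = label

    def call(self, inputs):
        tf.print(self.label, tf.shape(inputs))
        return inputs

model = keras.models.Sequential([
    keras.layers.Conv2D(64, 7, activation="relu", padding="same", input_shape=[28, 28, 1]),
    ShapeProbe("after conv1"),
    keras.layers.MaxPooling2D(2),
    ShapeProbe("after pool1"),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])

# During model.fit(...) the probes would print once per executed batch;
# a plain forward call prints immediately:
out = model(np.zeros((2, 28, 28, 1), dtype="float32"))
```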

For printing the dataset dimensions, the following worked pretty well right after training (fit):

for i in model.layers:
    print("------")
    # print(i, "\n", i.input_shape, "\n", i.output_shape, "\n", i.get_weights())
    print(i, "\n", i.input_shape, "\n", i.output_shape, "\n")



860/860 [==============================] - 6s 7ms/step - loss: 0.6708 - accuracy: 0.7530 - val_loss: 0.4993 - val_accuracy: 0.8162
------
<keras.layers.convolutional.Conv2D object at 0x7f87bcdf11d0>
 (None, 28, 28, 1)
 (None, 28, 28, 64)

------
<keras.layers.pooling.MaxPooling2D object at 0x7f869ff17400>
 (None, 28, 28, 64)
 (None, 14, 14, 64)

------
<keras.layers.convolutional.Conv2D object at 0x7f869ff17630>
 (None, 14, 14, 64)
 (None, 14, 14, 128)

------
<keras.layers.convolutional.Conv2D object at 0x7f869ff17b38>
 (None, 14, 14, 128)
 (None, 14, 14, 128)

------
<keras.layers.pooling.MaxPooling2D object at 0x7f869ff45080>
 (None, 14, 14, 128)
 (None, 7, 7, 128)

------
<keras.layers.convolutional.Conv2D object at 0x7f869ff452b0>
 (None, 7, 7, 128)
 (None, 7, 7, 256)

------
<keras.layers.convolutional.Conv2D object at 0x7f869ff457b8>
 (None, 7, 7, 256)
 (None, 7, 7, 256)

------
<keras.layers.pooling.MaxPooling2D object at 0x7f869ff45cc0>
 (None, 7, 7, 256)
 (None, 3, 3, 256)

------
<keras.layers.core.Flatten object at 0x7f869ff45ef0>
 (None, 3, 3, 256)
 (None, 2304)

------
<keras.layers.core.Dense object at 0x7f869fecb0b8>
 (None, 2304)
 (None, 128)

------
<keras.layers.core.Dropout object at 0x7f869fecb3c8>
 (None, 128)
 (None, 128)

------
<keras.layers.core.Dense object at 0x7f869fecb4e0>
 (None, 128)
 (None, 64)

------
<keras.layers.core.Dropout object at 0x7f869fecb828>
 (None, 64)
 (None, 64)

------
<keras.layers.core.Dense object at 0x7f869fecb908>
 (None, 64)
 (None, 10)

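If the actual intermediate tensors are wanted, not just the static shapes above, a second model can expose every layer's output and be run on a batch. A sketch using a small stand-in model, assuming TF 2.x (the same idea applies to the model from the question):

```python
import numpy as np
from tensorflow import keras

model = keras.models.Sequential([
    keras.layers.Conv2D(8, 3, activation="relu", padding="same", input_shape=[28, 28, 1]),
    keras.layers.MaxPooling2D(2),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])

# A second model that returns every layer's output for the same input.
probe = keras.Model(inputs=model.inputs,
                    outputs=[layer.output for layer in model.layers])

activations = probe(np.zeros((4, 28, 28, 1), dtype="float32"))
for layer, act in zip(model.layers, activations):
    print(layer.name, tuple(act.shape))
```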
In TensorFlow, model.summary() gives the output shape/dimensions at each layer along with the parameter counts.

_________________________________________________________________
Model: "sequential_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 123, 123, 32)      2432      
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 121, 121, 32)      9248      
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 40, 40, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 38, 38, 32)        9248      
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 36, 36, 32)        9248      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 32)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 10, 10, 32)        9248      
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 8, 8, 32)          9248      
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 4, 32)          0         
=================================================================
Total params: 48,672
Trainable params: 48,672
Non-trainable params: 0