```python
tf.keras.layers.Conv2D(
    filters, kernel_size, strides=(1, 1), padding='valid',
    data_format=None, dilation_rate=(1, 1), groups=1, activation=None,
    use_bias=True, kernel_initializer='glorot_uniform',
    bias_initializer='zeros', kernel_regularizer=None,
    bias_regularizer=None, activity_regularizer=None,
    kernel_constraint=None, bias_constraint=None, **kwargs)
```
followed by this example:
```python
# The inputs are 28x28 RGB images with `channels_last` and the batch
# size is 4.
input_shape = (4, 28, 28, 3)
x = tf.random.normal(input_shape)
y = tf.keras.layers.Conv2D(
    2, 3, activation='relu', input_shape=input_shape[1:])(x)
print(y.shape)
```

So I am assuming 2 filters, a 3x3 kernel size, etc. The book I have shows examples that specify a large number of filters (128, 64, etc.). Is that really accurate? https://www.machinecurve.com/index.php/2020/03/30/how-to-use-conv2d-with-keras/

Secondly, I am also "porting" this to a PyTorch equivalent, but PyTorch's Conv2d API makes no mention of filters; its only significant parameters are the in/out channels and the kernel size: https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html

So how does one specify the filters in PyTorch?
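For what it's worth, my working assumption is that the Keras `filters` argument corresponds to `out_channels` in `torch.nn.Conv2d` (each output channel is produced by one learned filter). A minimal sketch of the same 2-filter, 3x3 convolution from the Keras example, under that assumption:

```python
import torch
import torch.nn as nn

# PyTorch uses channels_first (NCHW), so the 4x28x28x3 Keras input
# becomes batch 4, 3 channels, 28x28 spatial dims.
x = torch.randn(4, 3, 28, 28)

# out_channels=2 plays the role of Keras' filters=2;
# kernel_size=3 means a 3x3 kernel, and padding defaults to 0 ('valid').
conv = nn.Conv2d(in_channels=3, out_channels=2, kernel_size=3)

y = conv(x)
print(y.shape)  # torch.Size([4, 2, 26, 26])
```

Note the output is channels-first (`(4, 2, 26, 26)`) rather than Keras' channels-last (`(4, 26, 26, 2)`), but the spatial size 26 = 28 - 3 + 1 and the 2 filter maps match.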