Question: Larger Image Size for GAN

I found this deep convolutional GAN example at https://www.geeksforgeeks.org/deep-convolutional-gan-with-keras/, and I want to modify it slightly. I have grayscale images that are 100x100 pixels, but the example is written for grayscale images that are 28x28 pixels. Unfortunately I can’t downsize my 100x100 images to 28x28, as information would be lost, so I need to modify the code to take in 100x100 pixel images.

I already wrote code that loads my training and testing images in the same format the example uses, just with 100x100 images instead. But I don’t know how to change the rest of the code. I assume I need to modify the generator and the discriminator somehow, but I don’t understand the syntax well enough to know how. I naively went through the code and swapped out 28 for 100 everywhere I saw it, but that just resulted in errors. Below is the code for the generator and the discriminator. How do I modify it to accept 100x100 pixel images?

# code
from tensorflow import keras

num_features = 100  # size of the random noise vector fed to the generator

# Generator: maps a noise vector to a 28x28x1 image via two stride-2 upsamplings (7 -> 14 -> 28)
generator = keras.models.Sequential([
    keras.layers.Dense(7 * 7 * 128, input_shape=[num_features]),
    keras.layers.Reshape([7, 7, 128]),
    keras.layers.BatchNormalization(),
    keras.layers.Conv2DTranspose(
        64, (5, 5), (2, 2), padding="same", activation="selu"),
    keras.layers.BatchNormalization(),
    keras.layers.Conv2DTranspose(
        1, (5, 5), (2, 2), padding="same", activation="tanh"),
])
generator.summary()

# Discriminator: downsamples a 28x28x1 image to a single real/fake probability
discriminator = keras.models.Sequential([
    keras.layers.Conv2D(64, (5, 5), (2, 2), padding="same", input_shape=[28, 28, 1]),
    keras.layers.LeakyReLU(0.2),
    keras.layers.Dropout(0.3),
    keras.layers.Conv2D(128, (5, 5), (2, 2), padding="same"),
    keras.layers.LeakyReLU(0.2),
    keras.layers.Dropout(0.3),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation='sigmoid')
])
discriminator.summary()

Hi @djr547, to run the code on 100x100 inputs you need to set input_shape=[100, 100, 1] in the discriminator, and you need to change the generator architecture so that the generator's output has shape (100, 100, 1). In the version below, the Dense/Reshape layers start from a 25x25 feature map, the stride-2 transposed convolutions upsample it, and the final Resizing layer brings the output to exactly 100x100. Could you please try the code below?

import tensorflow as tf

# num_features is the noise-vector size defined above (100)
generator = tf.keras.models.Sequential([
    tf.keras.layers.Dense(25 * 25 * 512, input_shape=[num_features]),
    tf.keras.layers.Reshape([25, 25, 512]),
    tf.keras.layers.BatchNormalization(),
    # each stride-2 Conv2DTranspose doubles the spatial size: 25 -> 50 -> 100 -> 200 -> 400
    tf.keras.layers.Conv2DTranspose(256, (5, 5), (2, 2), padding="same", activation="selu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Conv2DTranspose(128, (5, 5), (2, 2), padding="same", activation="selu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Conv2DTranspose(64, (5, 5), (2, 2), padding="same", activation="selu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Conv2DTranspose(1, (5, 5), (2, 2), padding="same", activation="tanh"),
    # resize the 400x400 output down to the target 100x100
    tf.keras.layers.Resizing(100, 100)
])
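
For the discriminator, the only required change is the input shape. A minimal sketch of that modification, assuming the rest of your discriminator stays exactly as in your post:

# Discriminator for 100x100x1 grayscale images.
# Same layers as your original code; only input_shape differs from the 28x28 version.
discriminator = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(64, (5, 5), (2, 2), padding="same", input_shape=[100, 100, 1]),
    tf.keras.layers.LeakyReLU(0.2),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Conv2D(128, (5, 5), (2, 2), padding="same"),
    tf.keras.layers.LeakyReLU(0.2),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
discriminator.summary()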

Thank You.