I'm a French digital artist, new to TensorFlow, who recently started working with AI!
I’m trying to train a DCGAN from a small batch of images I created using Stable Diffusion.
I followed the DCGAN tutorial on tensorflow.org and customized it to use my dataset. I have 40 images and I know that is not enough to produce a robust model, but I just wanted to see if I could make it work on a small scale first. Plus, my goal is not to have a really viable model, just to experiment and try to do a little bit of interpolation animation with it.
I can get my model to train, but it seems to repeatedly collapse. Even after 500 epochs, I only get yellowish noise. It sometimes even looks like it's starting to produce something, but then it becomes all yellow again. Here is a gif of one of my latest training runs: https://drive.google.com/file/d/198MxkCAHFu0-4ZGbaTX_cm-7hN2HcuLw/view?usp=share_link
My question is: am I doomed to fail with such a small dataset, or is it a question of a badly set-up discriminator? Is there a way I can still get some results without resorting to producing hundreds of thousands of images?
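One idea I've been considering to stretch the 40 images further is random augmentation in the input pipeline, so the discriminator never sees the exact same batch twice. Here is a minimal sketch of what I mean (the image size and value range are just assumptions from the DCGAN tutorial, not from my actual notebook):

```python
import tensorflow as tf

# Hypothetical augmentation stack for a tiny (~40 image) dataset.
# Random flips, small rotations, and zooms add variety without
# needing to generate more source images.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.05),
    tf.keras.layers.RandomZoom(0.1),
])

def augment_batch(images):
    """Apply random augmentations to a batch of training images.

    Assumes float32 images scaled to [-1, 1] with shape
    (batch, 64, 64, 3), as in the tensorflow.org DCGAN tutorial.
    training=True forces the random layers to actually transform.
    """
    return augment(images, training=True)
```

Would something like this help against the collapse, or does augmenting such a small set just make the discriminator memorize the augmentations instead?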
My code is here: https://colab.research.google.com/drive/1QDJqKDIcfP2msliQqf4Yh584-Zj78FgU