Hurricane Damage Recognition - Classification Fails?

I have developed CNN models for simple image recognition before. This problem is about recognising hurricane damage from satellite imagery, and I have created a CNN model to classify the images.

The loss and accuracy results show that the model is not learning at all, i.e. the binary classification accuracy stays flat at 0.5.

Please have a look. Here are links to my Colab notebook and the data file.

Thanks!


Hi @brendonwp. Did you try applying filters to your images before running your model, to enhance their properties?
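Something like this is what I had in mind, as a minimal sketch with Pillow (the paths here are just placeholders):

from PIL import Image, ImageFilter

# Enhance edges in one training image so damage boundaries
# stand out more clearly in the satellite tile.
img = Image.open("train/damage/example.jpeg")
filtered = img.filter(ImageFilter.EDGE_ENHANCE)
filtered.save("train_filtered/damage/example.jpeg")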

No, I didn’t. But I’ll add more convolutional layers to the model before the dense layer. That should have a similar effect, since the convolutions learn filters of their own.
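Roughly what I’m planning, as a sketch (the filter counts are placeholders, and IMG_SIZE comes from my notebook):

from tensorflow.keras import layers, models

# Three Conv2D/MaxPooling blocks before the dense head,
# so the network can learn its own filters.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(IMG_SIZE, IMG_SIZE, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary damage / no-damage output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])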

There seems to be an issue with how you map the dataset. Loading with the Keras ImageDataGenerator works for me:


from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values from [0, 255] to [0, 1]
train_datagen = ImageDataGenerator(rescale=1. / 255)
valid_datagen = ImageDataGenerator(rescale=1. / 255)

# Binary labels are inferred from the subdirectory names
train = train_datagen.flow_from_directory("train", target_size=(IMG_SIZE, IMG_SIZE), batch_size=BATCH_SIZE, class_mode="binary")
valid = valid_datagen.flow_from_directory("validation", target_size=(IMG_SIZE, IMG_SIZE), batch_size=BATCH_SIZE, class_mode="binary")

model.fit(train, epochs=15, verbose=1, validation_data=valid)

This gives me a validation accuracy of ~0.9.

Thanks! It’s running much better now. I still have to go back and debug the previous code properly, as I’d like to use the tf.data dataset structure in future.
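For when I revisit that, here is a minimal sketch of the tf.data route, assuming the same train/validation directory layout (note that image_dataset_from_directory does not rescale, so the 1/255 step has to be applied separately):

import tensorflow as tf

# Binary labels are inferred from subdirectory names, as with flow_from_directory
train_ds = tf.keras.utils.image_dataset_from_directory(
    "train", image_size=(IMG_SIZE, IMG_SIZE), batch_size=BATCH_SIZE, label_mode="binary")
valid_ds = tf.keras.utils.image_dataset_from_directory(
    "validation", image_size=(IMG_SIZE, IMG_SIZE), batch_size=BATCH_SIZE, label_mode="binary")

# Rescaling is not built in here, so map it over both datasets
rescale = tf.keras.layers.Rescaling(1. / 255)
train_ds = train_ds.map(lambda x, y: (rescale(x), y))
valid_ds = valid_ds.map(lambda x, y: (rescale(x), y))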

I’m getting a training accuracy of 0.78 at the third epoch, but a validation accuracy of 0.85. Any idea how validation could end up higher than training?

PS I have to run TF on my CPU, and it’s sloooooow…

I would say it’s just luck, or bad luck, whichever you want to call it. Adding BatchNormalization would probably stabilize it a little.
You could use the GPU available in Google Colab, or Kaggle (you get 40 GPU hours per week if you register your phone number).
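A sketch of where I’d put the BatchNormalization (the filter count is just illustrative):

from tensorflow.keras import layers

# Conv -> BatchNorm -> activation; the bias is redundant when
# BatchNormalization follows the convolution directly.
conv_block = [
    layers.Conv2D(64, 3, use_bias=False),
    layers.BatchNormalization(),
    layers.Activation("relu"),
    layers.MaxPooling2D(),
]

And tf.config.list_physical_devices("GPU") is a quick way to confirm the notebook actually sees a GPU.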

And yes, you can of course use the tf.data dataset structure, but I didn’t really feel like hunting for the bug there once the Keras ImageDataGenerator worked. It is a bit slower, but easier to read and debug.


After 15 epochs the training accuracy was well above 90%, and validation was still around 85% after having dipped for a while. My guess is that the earlier validation accuracy was just a fluke.
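If the train/validation gap keeps widening, I’ll probably add early stopping; a sketch of what I have in mind, reusing the fit call from above:

from tensorflow.keras.callbacks import EarlyStopping

# Stop once validation accuracy hasn't improved for 3 epochs,
# and roll back to the best weights seen so far.
early_stop = EarlyStopping(monitor="val_accuracy", patience=3, restore_best_weights=True)
model.fit(train, epochs=15, verbose=1, validation_data=valid, callbacks=[early_stop])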

For a directory layout like this, it is probably the simplest way to load image data.