Using TFOD for defect detection

Hi all,

I’m trying to use TFOD with the pre-trained model zoo models for a defect-detection application. In essence, I want to detect certain known and common defects in high-resolution, satellite-type imagery. A couple of tricky things I’m running into are:

  1. The images are high resolution, so when they’re resized for the network only a small fraction of the information is kept, and some of the smaller things I’m looking for are lost in the downscaling. To get around this, I divide each image into N evenly spaced rectangular tiles and pass each tile through the model individually (N isn’t fixed; I’m still experimenting to optimize it). That way I keep almost all of the original information and the tiles have an aspect ratio close to the original images (see the tiling sketch just after this list). Intuitively I would think this would do the trick, but I get an enormous regularization loss (using a resized SSD MobileNet V1 FPN model), and with the tiling my results are worse by a long shot. I think this is a result of overfitting, which is why the regularization loss becomes so large.

  2. My “defects” are not always the same. Say I’m looking for defect 1: it could be 1/4 of the total picture, 1/2 of the picture, sometimes the whole picture, etc. Furthermore, the defects may be different colors and don’t always look exactly the same, although there are always patterns that the machine is usually able to detect; I would just like to make it better. An example of this is trees that overhang roadways: the detector should be able to find the tree regardless of the type of tree, the type of road (city, urban, paved, parking lot, etc.), or the color of the leaves.
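
For reference, this is roughly what the tiling step looks like. It’s just a minimal sketch (the grid size and overlap are placeholder values I’m still tuning), not my exact code:

```python
import numpy as np
from PIL import Image

def tile_image(path, rows=4, cols=4, overlap=0.1):
    """Split a high-resolution image into an evenly spaced grid of tiles.

    rows, cols and overlap are placeholders; the real grid size is the
    "N rectangles" I'm still experimenting with.
    """
    img = np.asarray(Image.open(path))
    h, w = img.shape[:2]
    tile_h, tile_w = h // rows, w // cols
    pad_h, pad_w = int(tile_h * overlap), int(tile_w * overlap)

    tiles = []
    for r in range(rows):
        for c in range(cols):
            # Slightly overlap neighbouring tiles so objects sitting on a
            # tile boundary are not cut in half in every tile.
            y0 = max(r * tile_h - pad_h, 0)
            x0 = max(c * tile_w - pad_w, 0)
            y1 = min((r + 1) * tile_h + pad_h, h)
            x1 = min((c + 1) * tile_w + pad_w, w)
            # Keep the offsets so detections can be mapped back onto the
            # full-resolution image afterwards.
            tiles.append(((y0, x0), img[y0:y1, x0:x1]))
    return tiles
```

Each tile then goes through the detector on its own, and the stored offsets let me map any detections back onto the full-resolution image.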

I know this is a general question, but I thought I’d post it here in case anyone has had experience with something similar and is able to share some insight or make any suggestions. Any tips/tricks are appreciated!!

Thank you,
Derek

Hi @Derek_Boase ,

Here are some points I’ve gathered after reviewing the information you provided:

  1. For the issue of the defects varying in size and appearance, consider multi-scale detectors such as Faster R-CNN, RetinaNet, or YOLO, which are designed to detect objects at different scales and locations within an image.

  2. You could also try using data augmentation techniques, such as flipping, rotating, or scaling the images (see the sketch after this list).

  3. One option for improving performance on your high-resolution satellite imagery is to use models pre-trained on satellite datasets such as SpaceNet and DeepGlobe, which are specifically designed for this type of data and may give better results.
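
As a rough illustration of point 2: in TFOD these augmentations are normally enabled through data_augmentation_options in your pipeline.config, but the equivalent transforms look something like this with tf.image (the probabilities and ranges below are only examples, not tuned values):

```python
import tensorflow as tf

def augment(image):
    """Example geometric/photometric augmentations for overhead imagery.

    These mirror the kind of options TFOD exposes in pipeline.config
    (random_horizontal_flip, random_rotation90, etc.); the values below
    are illustrative only.
    """
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)  # usually safe for top-down views
    image = tf.image.rot90(image, k=tf.random.uniform([], 0, 4, dtype=tf.int32))
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, lower=0.9, upper=1.1)
    return image
```

Note that for object detection the bounding boxes need to be transformed along with the image, which is why letting TFOD handle this via data_augmentation_options is usually easier than hand-rolling the transforms.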

I hope this will help you!

Thanks.

Hi @Laxma_Reddy_Patlolla,

Thank you for the reply and the details. I’m going to give these models a try. Currently I’m using models from the pre-trained model zoo, and I didn’t see a YOLO option. One thing I’m curious about is whether starting from the pre-trained models is better than training from scratch, given that what I’m looking for (sticks, trees, ponds, construction equipment) is all so different from what is in the COCO dataset. Any thoughts?

Thank you!
DB