mAP is -1 for custom object detection

What does this result mean?
I have tried to follow the raccoon dataset tutorial as-is. Instead of giving a positive percentage, evaluation gives me -1 for every metric with SSD MobileNet on TensorFlow 1.15.



I also tried to train the network with TF2.
I get the same result after training with the TensorFlow Object Detection API 2.x on a custom dataset.
TF2: SSD MobileNet v2 320x320
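(For context on the -1 values: pycocotools, which the Object Detection API uses for its COCO metrics, initializes its precision/recall arrays to -1 and averages only over valid entries, so any IoU/area/maxDets slice with nothing to evaluate is summarized as -1. A simplified sketch of that logic:)

# Simplified from pycocotools' COCOeval.summarize(): precision entries start
# at -1; if nothing was accumulated for the chosen slice, every entry stays
# -1 and the summary line prints -1.
import numpy as np

s = np.full((10,), -1.0)           # no valid precision entries collected
valid = s[s > -1]
mean_s = -1.0 if valid.size == 0 else float(np.mean(valid))
print(mean_s)                      # -> -1.0, shown as "-1.000" in the logs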

Also, can you please suggest a script modification for model_main_tf2.py to get evaluation results exactly like model_main.py?
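(For reference: model_main_tf2.py runs evaluation as a separate job rather than interleaving it with training; passing --checkpoint_dir switches it into evaluation mode, and it then writes COCO metrics for each new checkpoint. A minimal invocation, with placeholder paths:)

python model_main_tf2.py \
  --pipeline_config_path=path/to/pipeline.config \
  --model_dir=path/to/model_dir \
  --checkpoint_dir=path/to/model_dir \
  --alsologtostderr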

Can you please share pipeline.config?

# Config based on TensorFlow's sample configuration files: https://github.com/tensorflow/models/tree/master/research/object_detection/samples/configs

model {
  ssd {
    num_classes: 1
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
    feature_extractor {
      type: "ssd_mobilenet_v1"
      depth_multiplier: 1.0
      min_depth: 16
      conv_hyperparams {
        regularizer {
          l2_regularizer {
            weight: 4e-05
          }
        }
        initializer {
          truncated_normal_initializer {
            mean: 0.0
            stddev: 0.03
          }
        }
        activation: RELU_6
        batch_norm {
          decay: 0.9997
          center: true
          scale: true
          epsilon: 0.001
          train: true
        }
      }
    }
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    box_predictor {
      convolutional_box_predictor {
        conv_hyperparams {
          regularizer {
            l2_regularizer {
              weight: 4e-05
            }
          }
          initializer {
            truncated_normal_initializer {
              mean: 0.0
              stddev: 0.03
            }
          }
          activation: RELU_6
          batch_norm {
            decay: 0.9997
            center: true
            scale: true
            epsilon: 0.001
            train: true
          }
        }
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        use_dropout: false
        dropout_keep_probability: 0.8
        kernel_size: 1
        box_code_size: 4
        apply_sigmoid_to_scores: false
      }
    }
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
      }
    }
    post_processing {
      batch_non_max_suppression {
        score_threshold: 0.3
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SIGMOID
    }
    normalize_loss_by_num_matches: true
    loss {
      localization_loss {
        weighted_smooth_l1 {
        }
      }
      classification_loss {
        weighted_sigmoid {
        }
      }
      hard_example_miner {
        num_hard_examples: 3000
        iou_threshold: 0.99
        loss_type: CLASSIFICATION
        max_negatives_per_positive: 3
        min_negatives_per_image: 0
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
  }
}
train_config {
  batch_size: 1
  batch_queue_capacity: 100
  num_batch_queue_threads: 8
  prefetch_queue_capacity: 10
  
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
  optimizer {
    rms_prop_optimizer {
      learning_rate {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.004
          decay_steps: 800720
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "/content/gdrive/ssd_mobilenet_v1_coco_11_06_2017/model.ckpt"
  from_detection_checkpoint: true
  num_steps: 50000
}
train_input_reader {
  shuffle_buffer_size: 200
  label_map_path: "/content/gdrive/gsv/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "/content/gdrive/gsv/train_xml.record"
  }
}
eval_config {
  num_examples: 400
  max_evals: 10
  use_moving_averages: false
}
eval_input_reader {
  label_map_path: "/content/gdrive/gsv/label_map.pbtxt"
  shuffle: false
  num_readers: 1
  tf_record_input_reader {
  input_path: "/content/gdrive/gsv/test_xml.record"
  }
}
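(One quick way to confirm the file parses as intended is to load it with the API's config utilities; a sketch, assuming the TF Object Detection API is installed and the file is in the working directory:)

# Load and inspect the pipeline config with the Object Detection API utilities.
from object_detection.utils import config_util

configs = config_util.get_configs_from_pipeline_file("pipeline.config")
print(configs["model"].ssd.num_classes)      # 1
print(configs["eval_config"].num_examples)   # 400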
num_classes: 1

Do you only have one class in your training data set?

Could you please share the label_map.pbtxt file? Thank you!

Yes, I am only training for one class.

item {
	id: 1
	name: '--------'
}

PS: I don't want to disclose the class name, as I am using my company's dataset under NDA.

Any help with the evaluation error is much appreciated. Thank you!

Hi @Sayali_22,

Can you check this notebook, which is a working prototype of the model you are using, on a custom dataset, with all the related configurations? Please swap in your dataset there, try it, and let me know if it works.

Please make sure that label_map.txt correctly matches annotations.json when creating the tfrecords.
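A quick sanity check along these lines (a sketch, assuming a .pbtxt label map, the TF Object Detection API installed, and placeholder file names):

import json
from object_detection.utils import label_map_util

# Load {name: id} from the label map used to build the TFRecords.
label_map = label_map_util.get_label_map_dict("label_map.pbtxt")

with open("annotations.json") as f:
    coco = json.load(f)

# Every COCO category must exist in the label map with the same id.
for cat in coco["categories"]:
    assert label_map.get(cat["name"]) == cat["id"], (
        "Mismatch for %s: label map id %s vs annotation id %s"
        % (cat["name"], label_map.get(cat["name"]), cat["id"]))
print("Label map and annotations agree.")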

Thanks.

Thank you so much, @Laxma_Reddy_Patlolla!
Can you please guide me further on how to create the annotations.coco.json file from the data below?

I have annotations in a .txt file, converted to a .csv file and then to .tfrecord.

Also, while training the same dataset on YOLO, I had a folder structure like this, following this tutorial.

Hi @Sayali_22,

1. Could you please check the COCO dataset format and prepare your dataset in JSON format (a conversion sketch follows below).
2. Check the following dataset structure and the other examples present, then make sure you have the same structure before generating the tfrecords.
3. You can check the above notebook, which uses the same dataset for training.
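A conversion sketch, assuming the CSV follows the raccoon-tutorial columns (filename, width, height, class, xmin, ymin, xmax, ymax); the file names here are placeholders:

import csv
import json

def csv_to_coco(csv_path, json_path, category_name="object"):
    """Convert a raccoon-tutorial-style CSV of boxes into COCO annotations."""
    images = {}
    annotations = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            fname = row["filename"]
            if fname not in images:
                images[fname] = {
                    "id": len(images) + 1,
                    "file_name": fname,
                    "width": int(row["width"]),
                    "height": int(row["height"]),
                }
            x, y = float(row["xmin"]), float(row["ymin"])
            w = float(row["xmax"]) - x
            h = float(row["ymax"]) - y
            annotations.append({
                "id": len(annotations) + 1,
                "image_id": images[fname]["id"],
                "category_id": 1,       # single class, matching label_map id 1
                "bbox": [x, y, w, h],   # COCO boxes are [x, y, width, height]
                "area": w * h,
                "iscrowd": 0,
            })
    coco = {
        "images": list(images.values()),
        "annotations": annotations,
        "categories": [{"id": 1, "name": category_name}],
    }
    with open(json_path, "w") as f:
        json.dump(coco, f, indent=2)

csv_to_coco("train_labels.csv", "annotations.json")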

Thanks.

Thank you so much for this notebook, but it has multiple issues with Google Colab compatibility (I can't emphasize enough how many errors I tried resolving).

Even if I just run it as-is, without my dataset customisation, it does not work.

Please suggest any other workaround. Thank you! :)

I was able to run the whole notebook as-is. It just took around 10 minutes to complete, since it runs 3,000 steps.