Extracting batch_size from model.h5

Hi,

How can I extract the batch_size from a loaded model that was trained with a specific batch_size?

Regards,
Stijn

I am new here and I have the same question. Could anyone please help us?

Hi @Stijno2, I believe the model does not save the batch_size, since it is always inferred. Essentially the model is free to train or predict with any batch size, as long as it fits in memory.


That’s right.

All layers (at least the ones I know of) don’t care about the batch size and work with arbitrary batch sizes.

Of course, if you have implemented a custom layer which requires a fixed batch size (doesn’t sound like a good idea, but you never know) you should look at that layer’s input shape. The first entry (which will be None for layers which don’t need a specific batch size) will give you the expected batch size.

You can iterate over all layers of the model and check their input shapes.
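Something like this should work (a rough sketch, assuming TF 2.x Keras; the model path is just a placeholder):

import tensorflow as tf

model = tf.keras.models.load_model("model.h5")  # example path

for layer in model.layers:
    try:
        inputs = layer.input  # a tensor, or a list for multi-input layers
    except AttributeError:
        continue  # layer is not connected, so there is no input to inspect
    if not isinstance(inputs, (list, tuple)):
        inputs = [inputs]
    for tensor in inputs:
        # shape[0] is the batch dimension; None means "any batch size"
        print(layer.name, tensor.shape.as_list())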

Perhaps a more important question. Why do you need to know the batch size with which the model was trained?

Your model shape is typically not static, even if you trained with a certain batch size. That’s also the reason why your batch size is NOT saved within the model’s shape during training.

If you print out the summary of your Keras model with your_model.summary(),
you will see something like (None, 256) at one of your Dense layers. The first element is always the batch size, and None means: I (TensorFlow) will set this depending on the shape of my input.
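For example (a minimal sketch; the layer sizes are just an illustration):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.summary()
# The "Output Shape" column shows (None, 256) for the first Dense layer:
# None is the batch dimension, filled in only when data is actually fed in.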

BUT of course you can make it static (e.g. this is necessary for ahead-of-time compilation). Since the batch size is always the FIRST element of the shape, you can easily extract it with this code:

model_inputs = your_model.inputs
# model_inputs is a list, but most models have only a single input node;
# the batch size is the same for every entry anyway.

for model_input in model_inputs:
    shape = model_input.shape.as_list()
    # ignore inputs with empty shapes
    if not shape:
        continue
    # the shape's first element is always the batch size (None if not fixed)
    batch_size = shape[0]
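And if you want to go the other way and bake a fixed batch size into the model (e.g. for the ahead-of-time compilation mentioned above), you can set it when defining the input. A sketch, assuming TF 2.x Keras; the sizes are placeholders:

import tensorflow as tf

inputs = tf.keras.Input(shape=(128,), batch_size=32)  # fix the batch dimension to 32
outputs = tf.keras.layers.Dense(10)(inputs)
model = tf.keras.Model(inputs, outputs)

print(model.inputs[0].shape.as_list())  # [32, 128] -- batch size is now 32 instead of None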

Hi Steven.

Sorry for my late reply… normally I receive an email when I get a reply on the forum :see_no_evil:

I’m making a program that automatically sweeps through different combinations of parameters and then picks the best model. However, I think I confused myself, as somehow I thought I needed to train the model again…

However, I can simply choose not to train it again, and just evaluate the model with model.predict.

Regards,
Stijn

Hi Zlorf,
Sorry for my late reply.

Thank you for your solution and explanation! :slight_smile:

Regards,
Stijn

Thanks for elaborating.

As you’ve figured out, you can use a different batch size when evaluating a model compared to what you’ve trained the model with.
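For example (a sketch; the file name, shapes, and batch sizes are placeholders):

import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("model.h5")
x = np.random.rand(1000, 128).astype("float32")  # dummy data matching the model's input shape

# Even if the model was trained with batch_size=32, predicting with another batch size works:
predictions = model.predict(x, batch_size=128)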

Note that the batch size used during training can have an impact on the model’s accuracy, so two models which are exactly the same but were trained with different batch sizes can give different results. But in my experience the differences are typically not that big, and not on the order of “making or breaking” the model.

Make sure that you evaluate your models on a separate set of test data, not on the training data; that way you make sure you’re evaluating real-life performance on data that the model hasn’t seen during training.
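A common pattern for that (a sketch assuming scikit-learn is available for the split; data, shapes, and the model are placeholders):

import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 128).astype("float32")  # placeholder features
y = np.random.randint(0, 10, size=1000)          # placeholder labels

# Hold out 20% as a test set the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

model.fit(X_train, y_train, batch_size=32, epochs=5)
loss, accuracy = model.evaluate(X_test, y_test)  # performance on unseen data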

This might be overkill, but there are some frameworks out there that help you keep track of training experiments and decide which hyperparameters are best. I have no experience with them; I think one of the better-known ones is called ‘Weights & Biases’: Weights & Biases – Developer tools for ML

Kind regards and happy New Year!

Steven


Hi Steven,

Weights & Biases is exactly what I’m using for determining the best combination of hyperparameters. It’s really nice!

Thanks, you too!

Cheers,
Stijn