InvalidArgumentError and shapes when doing inference

Hi everyone,

I am creating a new post as this error just popped up in my code, and I am a bit clueless (again).

For reference, my inference function can be found here: Multi-GPU inference - am I doing it right? - #5 by FloFive

The code predicts a segmented volume from tomographic (grayscale) 3D data. The function works fine for the following cube sizes (in voxels): 128, 256, 512, … up to 1024, but when I try a 1728 vx cube, it throws this: InvalidArgumentError: Shapes of all inputs must match: values[0].shape = [2,64,64,64] != values[1].shape = [1,64,64,64] [Op:Pack] name: packed

The line involved is this one: `batch_array = tf.identity(tensors_tuple)`
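For what it's worth, `[Op:Pack]` is the op behind `tf.stack`, so converting a tuple of tensors into one batch tensor fails as soon as one element has a different first dimension. A minimal NumPy analogue of the same failure (the shapes are taken from the error message, not from the actual code):

```python
import numpy as np

# Two "batches" of 64^3 patches: one full batch of 2 and one leftover of 1,
# mirroring values[0].shape = [2,64,64,64] vs values[1].shape = [1,64,64,64].
full_batch = np.zeros((2, 64, 64, 64), dtype=np.float32)
last_batch = np.zeros((1, 64, 64, 64), dtype=np.float32)

try:
    np.stack([full_batch, last_batch])  # same all-shapes-must-match constraint as Op:Pack
except ValueError as e:
    print("stack failed:", e)
```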

I can’t really understand why none of the previous sizes gives me this error, but 1728 does. Does anyone have a clue, maybe?
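One thing worth checking (a sketch, assuming a batch size of 2 inferred from `values[0].shape = [2,...]` in the error): with 64-voxel patches, every cube size up to 1024 yields an even number of patches, while 1728 gives 27³ = 19683, an odd count, so pairing patches two-by-two leaves a final group of only 1:

```python
patch = 64
for size in (128, 256, 512, 1024, 1728):
    n = (size // patch) ** 3  # total number of 64^3 patches in the cube
    last = n % 2 or 2         # size of the final group when batching in pairs
    print(f"{size:>4} vx -> {n:>5} patches, last batch of {last}")
```

Only the 1728 case produces a final batch of 1, which would then fail to stack with the full batches of 2.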

Thanks a lot!

Hi @FloFive, could you please provide standalone code to reproduce the issue? Thank you.

Unfortunately I cannot provide you with the full code, as I am under heavy restrictions.

What I can do, though, is give you the code that reads my 3D data and patches it into subvolumes:

```python
path_pred = path_folder + "some link"
image_pred = io.imread(path_pred + 'image.tif')
#mask_pred = io.imread(path_pred + 'mask.tif')

# Load the scaler fitted at training time; use transform (not fit_transform)
# so the prediction data is scaled with the training statistics.
scaler = load(path_folder + 'runs/pangea3/' + folder_name + '/std_scaler_image.bin')
image_pred = np.float32(scaler.transform(image_pred.reshape(-1, image_pred.shape[-1])).reshape(image_pred.shape))

patches = patchify(image_pred, (number_patchify, number_patchify, number_patchify), step=number_patchify)
```

The patches variable is then fed into the code I’ve provided in my other post; its shape is (27, 27, 27, 64, 64, 64) for a cube with a side length of 1728 voxels.
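For anyone without the `patchify` package, here is a self-contained NumPy sketch of the same non-overlapping patch extraction. It is shown on a small 8³ cube with 4³ patches, since a full (27, 27, 27, 64, 64, 64) float32 array would be roughly 20 GB:

```python
import numpy as np

def patchify_cube(vol: np.ndarray, p: int) -> np.ndarray:
    """Split a cubic volume into non-overlapping p^3 patches,
    equivalent to patchify(vol, (p, p, p), step=p) for exact multiples."""
    n = vol.shape[0] // p
    return (vol[:n * p, :n * p, :n * p]
            .reshape(n, p, n, p, n, p)
            .transpose(0, 2, 4, 1, 3, 5))

vol = np.arange(8 ** 3, dtype=np.float32).reshape(8, 8, 8)
patches = patchify_cube(vol, 4)
print(patches.shape)  # (2, 2, 2, 4, 4, 4) -- same layout as (27, 27, 27, 64, 64, 64)
```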

The model loaded for the prediction is a regular 3D U-Net CNN, inspired by this one: python_for_image_processing_APEER/tutorial122_3D_Unet.ipynb at master · bnsreenu/python_for_image_processing_APEER · GitHub

Sorry I can’t give you the full extent of the code :frowning: