A `Concatenate` layer requires inputs with matching shapes except for the concatenation axis. Received: input_shape=[(None, 7, 7, 512), (None, 2)]

I have two models that I want to combine with early fusion, which requires concatenating their features first. However, I get the error above at the `Concatenate` call and, even after searching other questions and forums, I can't figure out how to fix it. Are there any recommendations you can give me? Unfortunately, I can't share the image data or give access to it.

The first model is VGG16-Places365, available at https://github.com/GKalliatakis/Keras-VGG16-places365:

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l2
from tensorflow.keras.utils import get_file

# Release asset defined in the linked repository.
WEIGHTS_PATH_NO_TOP = ('https://github.com/GKalliatakis/Keras-VGG16-places365/'
                       'releases/download/v1.0/'
                       'vgg16-places365_weights_tf_dim_ordering_tf_kernels_notop.h5')


def VGG16_Places365(weights='places',
                    input_shape=None,
                    pooling=None,
                    classes=365):
    # Convolutional base of VGG16 trained on Places365 (no classifier head);
    # pooling and classes are unused in this trimmed copy of the repo's code.
    img_input = Input(shape=input_shape)

    # Block 1
    x = Conv2D(filters=64, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block1_conv1_365')(img_input)
    x = Conv2D(filters=64, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block1_conv2_365')(x)
    x = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), name='block1_pool_365', padding='valid')(x)

    # Block 2
    x = Conv2D(filters=128, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block2_conv1_365')(x)
    x = Conv2D(filters=128, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block2_conv2_365')(x)
    x = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), name='block2_pool_365', padding='valid')(x)

    # Block 3
    x = Conv2D(filters=256, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block3_conv1_365')(x)
    x = Conv2D(filters=256, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block3_conv2_365')(x)
    x = Conv2D(filters=256, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block3_conv3_365')(x)
    x = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), name='block3_pool_365', padding='valid')(x)

    # Block 4
    x = Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block4_conv1_365')(x)
    x = Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block4_conv2_365')(x)
    x = Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block4_conv3_365')(x)
    x = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), name='block4_pool_365', padding='valid')(x)

    # Block 5
    x = Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block5_conv1_365')(x)
    x = Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block5_conv2_365')(x)
    x = Conv2D(filters=512, kernel_size=3, strides=(1, 1), padding='same',
               kernel_regularizer=l2(0.0002),
               activation='relu', name='block5_conv3_365')(x)
    x = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), name='block5_pool_365', padding='valid')(x)

    inputs = img_input

    # Create model.
    model = Model(inputs, x, name='vgg16-places365')

    # Load the pre-trained Places365 weights (no-top variant).
    weights_path = get_file('vgg16-places365_weights_tf_dim_ordering_tf_kernels_notop.h5',
                            WEIGHTS_PATH_NO_TOP,
                            cache_subdir='models')
    model.load_weights(weights_path)

    return model
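For reference, the `(None, 7, 7, 512)` in the error message is exactly this backbone's output shape: with a 224x224 input, five 2x2 max-poolings reduce the spatial size to 224 / 2**5 = 7. A quick check (a sketch, using the same 224x224 input as the question):

# Sketch: confirm the backbone ends in a rank-4 feature map.
places_model = VGG16_Places365(weights='places', input_shape=(224, 224, 3))
print(places_model.output_shape)  # (None, 7, 7, 512) -- first shape in the error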

The second model is a VGG19-based model trained on 224x224 RGB images; it is trained, saved, and loaded again later. This is how I built it:

import pandas as pd
from tensorflow.keras import losses
from tensorflow.keras.applications import VGG19
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.metrics import Precision, Recall
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam

models_input_shape = (224, 224, 3)
num_classes = len(pd.unique(train_dataset['T1']))

base_model = VGG19(weights='imagenet', include_top=False, input_shape=models_input_shape)

# Freeze the convolutional base; only the new dense head is trained.
for layer in base_model.layers:
    layer.trainable = False

model = Sequential()
model.add(base_model)
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(num_classes, activation='sigmoid'))
model.compile(loss=losses.BinaryCrossentropy(),
              optimizer=Adam(learning_rate=0.0001),
              metrics=['accuracy', Precision(), Recall()])
epochs = 2
batch_size = 32
steps_per_epoch = train_generator.n // train_generator.batch_size
# validation_steps = valid_generator.n // batch_size

history = model.fit(
    train_generator,
    steps_per_epoch=steps_per_epoch,
    epochs=epochs,
    # validation_data=valid_generator,
    # validation_steps=validation_steps,
)
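Two details worth noting here. First, the final `Dense(num_classes, activation='sigmoid')` layer means the whole model outputs a tensor of shape `(None, num_classes)`; the `(None, 2)` in the error message implies `num_classes` is 2 in this run. Second, the model is loaded from `vgg19_trained.keras` below, so after training it was presumably saved along these lines (the save call is not shown in the question; this is the standard Keras API for it):

# Assumed save step (not shown above); this would produce the
# 'vgg19_trained.keras' file that is loaded later.
model.save('vgg19_trained.keras')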

This is how I call the two models to obtain their features, and where the concatenation finally raises the error:

from tensorflow.keras.layers import Concatenate, Input
from tensorflow.keras.models import load_model

model_vgg16_places365 = VGG16_Places365(weights='places', input_shape=(224, 224, 3))
model_365_features = model_vgg16_places365(Input(shape=(224, 224, 3)))  # (None, 7, 7, 512)

vgg19_model_location = 'vgg19_trained.keras'
vgg19_model = load_model(vgg19_model_location)
vgg19_model_features = vgg19_model(Input(shape=(224, 224, 3)))  # (None, 2)

# Raises: shapes must match except for the concatenation axis
merged_features = Concatenate()([model_365_features, vgg19_model_features])

Hi @Javier_Romero, `tf.keras.layers.Concatenate` takes as input a list of tensors that all have the same shape except along the concatenation axis. By default the axis is -1, so every dimension except the last must match. In your code the Places365 backbone returns a rank-4 feature map of shape `(None, 7, 7, 512)`, while calling the full VGG19 model returns its final classification output of shape `(None, 2)`; the two tensors don't even have the same rank, so the layer cannot concatenate them. Bring both outputs to compatible shapes first, for example by pooling the feature map down to a vector and taking an intermediate feature layer from the VGG19 model instead of its prediction layer. Thank you.
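A minimal sketch of one way to do this (the choice of `GlobalAveragePooling2D`, and of `layers[-2]`, i.e. your `Dense(64)` layer, as the feature layer are assumptions; pick whichever layer holds the features you want). Note that both branches share a single `Input`, so the merged features come from the same image:

from tensorflow.keras.layers import Concatenate, GlobalAveragePooling2D, Input
from tensorflow.keras.models import Model, load_model

image_input = Input(shape=(224, 224, 3))

# Places365 branch: pool the (None, 7, 7, 512) feature map down to (None, 512).
model_vgg16_places365 = VGG16_Places365(weights='places', input_shape=(224, 224, 3))
places_vector = GlobalAveragePooling2D()(model_vgg16_places365(image_input))

# VGG19 branch: expose an intermediate Dense layer instead of the final
# (None, 2) classification output; layers[-2] is the Dense(64) layer here.
vgg19_model = load_model('vgg19_trained.keras')
vgg19_feature_extractor = Model(inputs=vgg19_model.input,
                                outputs=vgg19_model.layers[-2].output)
vgg19_vector = vgg19_feature_extractor(image_input)  # (None, 64)

# Both tensors are now rank 2 and differ only along the last (concatenation)
# axis: (None, 512) + (None, 64) -> (None, 576).
merged_features = Concatenate()([places_vector, vgg19_vector])
fusion_model = Model(inputs=image_input, outputs=merged_features)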
