Multiplication of outputs from two Conv2D layers

Hi, I’m trying to implement the SOLO architecture (decoupled version) for instance segmentation in TensorFlow.

https://arxiv.org/pdf/1912.04488.pdf

Right now, I need to compute the loss function, and for that I need to multiply every output map from one Conv2D layer with every output map from another.

xi = Conv2D(…)(input) # output is (batch, None, None, 24)
yi = Conv2D(…)(input) # output is (batch, None, None, 24)

I need to multiply the output filters of xi with yi element-wise, in a way that gives an output of shape (batch, None, None, 24*24).
I tried to do this with for loops but get the error “OperatorNotAllowedInGraphError: iterating over tf.Tensor is not allowed: AutoGraph did convert this function”.

Any advice to achieve this?

I’ve not personally verified this loss implementation against the original SOLO paper, but check whether it could help you as a baseline:

Thank you for the answer. I already looked into the mentioned implementation, but I cannot find any answer there for the multiplication of two Conv2D layers in the decoupled way.

Do you have a very small isolated example of what you want to achieve? E.g. two dummy input Tensors and the expected output?

According to the paper, I need element-wise multiplication of the output maps between two conv layers. In NumPy, something like this:

import numpy as np

batch_size = 3
xi = np.ones((batch_size, 10, 10, 24))
yi = np.ones((batch_size, 10, 10, 24))

# Build all 24*24 pairwise products of the channel maps
results = []
for i in range(24):
  for k in range(24):
    results.append(xi[:, :, :, i] * yi[:, :, :, k])

# np.stack(results, axis=-1) would give the target shape (batch, 10, 10, 24*24)

In TensorFlow 2 I can run a simple example like this, but it fails during training:

import numpy as np
import tensorflow as tf

a = np.zeros((3, 10, 10, 24))
b = np.zeros((3, 10, 10, 24))

mask_preds = []
for i in range(24):
  for j in range(24):
    mask_pred = tf.multiply(a[:, :, :, i], b[:, :, :, j])
    mask_preds.append(mask_pred.numpy())  # .numpy() only works in eager mode

mask_preds = tf.constant(mask_preds)                 # (24*24, batch, 10, 10)
mask_preds = tf.transpose(mask_preds, [1, 2, 3, 0])  # (batch, 10, 10, 24*24)

If you mean that it is failing with fit etc., that is because you are then running this loop in graph mode, and you are in the same case as:

Probably if you pass run_eagerly=True it will run fine, but it will be slower in eager mode with these nested loops.

https://www.tensorflow.org/api_docs/python/tf/keras/Model#compile
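For reference, a minimal sketch of passing that flag; the model, optimizer, and loss below are placeholders, not from the original post:

import tensorflow as tf

# Dummy model just to demonstrate the flag
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(
    optimizer="adam",
    loss="mse",
    run_eagerly=True,  # disables graph tracing, so eager-only ops like .numpy() work
)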

Did you find a similar loop in the official SOLO implementation at:


I also tried run_eagerly=True, but got AttributeError: 'Tensor' object has no attribute 'numpy'. And when not using .numpy() I got an error on tf.constant(mask_preds): TypeError: Expected any non-tensor type, got a tensor instead, plus other problems.

The mentioned XIinlong-SOLO repo is implemented in PyTorch, and looking into it, they take a somewhat different approach. They use a lot of loops…

Maybe I will just skip the decoupled head and use the coupled head, which should be more straightforward to implement, as I don’t need to multiply two Conv2D layers.

I just thought that multiplying two…

Have you checked the decoupled head in:

If somebody is looking for the answer, here it is:

import tensorflow as tf

@tf.function
def outerprodflatten(x, y, channel_dims):
    """Pairwise products of channel maps: (B, H, W, C) x (B, H, W, C) -> (B, H, W, C*C).

    From StackOverflow:
    https://stackoverflow.com/questions/68361071/multiply-outputs-from-two-conv2d-layers-in-tensorflow-2
    """
    # repeat gives [x0, x0, ..., x1, x1, ...]; tile gives [y0, y1, ..., y0, y1, ...],
    # so the element-wise product contains every pair x_i * y_j, with no Python loops.
    return tf.repeat(x, channel_dims, -1) * tf.tile(y, [1, 1, 1, channel_dims])
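A quick sanity check of the shapes, using dummy random tensors in place of the Conv2D outputs:

import tensorflow as tf

x = tf.random.normal((3, 10, 10, 24))
y = tf.random.normal((3, 10, 10, 24))
out = outerprodflatten(x, y, 24)
print(out.shape)  # (3, 10, 10, 576), i.e. (batch, 10, 10, 24*24)

The channel ordering also matches the nested loop above: the product x_i * y_j lands at index i*channel_dims + j of the last axis.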