Extracting feature maps right after each conv layer

A question to experts

Is it possible to extract the feature maps right after each conv layer as numpy arrays, do computations on them, and then convert the resulting feature-map arrays back to tensors to feed them to the next layer in the model?

If that is possible, please let me know; I'm stuck, as I have tried to do this many times and failed.

To make it clear, please have a look at the following example:

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D

inputs = Input(shape=(48, 48, 3))
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)

####   here I need to get the activation maps of conv1 as numpy arrays   ####

pool1 = MaxPooling2D((2, 2))(conv1)    # shape=(None, 24, 24, 32)

conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1)   # shape=(None, 24, 24, 64)
conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2)
pool2 = MaxPooling2D((2, 2))(conv2)

Note: I am using TF 2.7, Keras 2.6, and Python 3.8.1.

Why do you need to manipulate it in numpy?

You can always create a custom layer or a Lambda layer:

If you want, we also have a numpy API in TF (tf.experimental.numpy):
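For illustration, here is a minimal Lambda-layer sketch (not from the original thread; the per-map scaling is just a placeholder for whatever computation is needed) showing how a TF-ops step can be slotted between two layers:

import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Lambda, MaxPooling2D
from tensorflow.keras.models import Model

def per_map_op(x):
    # x: (batch, H, W, C); any TF ops are allowed here.
    # Placeholder computation: scale each feature map by its own max.
    max_per_map = tf.reduce_max(x, axis=[1, 2], keepdims=True)  # (batch, 1, 1, C)
    return x / (max_per_map + 1e-7)

inputs = Input(shape=(48, 48, 3))
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
conv1 = Lambda(per_map_op)(conv1)      # runs inside the graph, no numpy needed
pool1 = MaxPooling2D((2, 2))(conv1)
model = Model(inputs, pool1)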


Thanks for your reply.
Actually, I need to convert the tensor generated by a convolution layer to numpy because I have to find the maximum and minimum pixel values in each feature map. This helps me identify the range of values in each feature map, and then I can do the rest of the computations.

I tried to convert a tensor to numpy using mytensor.numpy() inside a custom layer; however, this also didn't work.

Why couldn't you do that with TF ops or TF numpy ops?


For TF ops like tf.math.reduce_max, tf.math.reduce_min, tf.math.maximum, or tf.math.minimum:
as far as I can tell, these functions don't return the maximum/minimum value of each feature map; their behaviour depends on a specific axis, which seems different from what I need.

It would be very kind of you to recommend other TF functions (if any) that could extract the max/min values from each feature map, without needing a TF → numpy conversion.
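(For reference, and as the replies below also work toward: pointing the axis argument at the spatial axes of a channels-last tensor does yield exactly one max/min per feature map. A minimal sketch, assuming NHWC feature maps:)

import tensorflow as tf

# Dummy batch of NHWC feature maps: (batch=2, H=48, W=48, C=32)
fm = tf.random.uniform((2, 48, 48, 32))

# Reducing over the spatial axes 1 and 2 leaves one value per feature map:
max_per_map = tf.reduce_max(fm, axis=[1, 2])   # shape (2, 32)
min_per_map = tf.reduce_min(fm, axis=[1, 2])   # shape (2, 32)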

For TF numpy ops:
I am new to TensorFlow and Keras, and in fact I only just learned that this API exists. After going through the tf.experimental.numpy documentation, the differences from regular TF seem marginal; I don't know whether I can use TF numpy ops instead of regular TF ops to build models.

Thanks for being patient with me!

Can you explain what operation you want to do?

If you like the numpy-style API, you can use it. See:


Taking VGG feature maps as an example:

import numpy as np

# VGG_model is assumed to be an already-loaded VGG network
VGG_model = Model(inputs=VGG_model.input, outputs=VGG_model.get_layer('block1_conv2').output)  # take the features of the first VGG block
feature_maps = VGG_model.predict(img)   # extract feature maps from img; returns a numpy array
single_fm = feature_maps[0, :, :, 0]    # take a single feature map (one channel) of the first image
maxval = np.max(single_fm)
minval = np.min(single_fm)

In the above example, I can perform numpy operations like max/min because the result of VGG_model.predict is a numpy array by default.
However, if I need to apply this code between the layers of my model (as stated in the post above), the output of each layer is a KerasTensor. Therefore, I need to convert the KerasTensor to numpy to do the max/min operations on each individual feature map.
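A minimal sketch of that type difference (using an untrained VGG16 purely for illustration):

import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model

vgg = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
sub = Model(inputs=vgg.input, outputs=vgg.get_layer('block1_conv2').output)

# predict() runs the model on real data and returns a plain numpy array:
img = np.zeros((1, 224, 224, 3), dtype='float32')
print(type(sub.predict(img)))                      # <class 'numpy.ndarray'>

# A layer output used while *building* a model is symbolic: it carries no
# values yet, so .numpy() is unavailable on it.
print(type(vgg.get_layer('block1_conv2').output))  # KerasTensor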

Hope this is clear to you.

Yes, but here I only see that you are extracting a single feature map and taking its max and min.

What do you really need to do with the feature map before passing it to the next layer?


Exactly, this is only one feature map; I then apply the same process to all the other feature maps generated by a particular convolution layer.
I need to extract the min and max values of each feature map so I can identify the range of values it contains. With that derived information I can then run an enhancement process on those feature maps.
As a result, I will get new feature maps whose pixel values are set through other equations that emphasise foreground pixels and suppress background ones.
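A vectorized sketch of that idea (the min-max stretch below stands in for the actual enhancement equations, which are not given in the thread); wrapped in a Lambda layer, it handles all feature maps of the whole batch at once, with no loop:

import tensorflow as tf

def enhance(x):
    # x: (batch, H, W, C) feature maps
    mx = tf.reduce_max(x, axis=[1, 2], keepdims=True)  # per-map max, (batch, 1, 1, C)
    mn = tf.reduce_min(x, axis=[1, 2], keepdims=True)  # per-map min, (batch, 1, 1, C)
    # Placeholder enhancement: stretch each map to [0, 1] using its own range,
    # raising foreground activations relative to the background.
    return (x - mn) / (mx - mn + 1e-7)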

To have a concrete example, can you write a dummy constant tensor with the shape you want, and show me what min and max you want to extract from that tensor as output?

import tensorflow as tf
tensor = tf.constant([[[1,2,3],[3,4,5]],[[6,7,8],[9,10,11]]])
print(tensor.shape)
print(tensor)

Please find the code on Google Colab using this link:

Thanks

You need to add permission.


Oops, sorry, my bad.
Please try this link:

That example is too long. Can you write your input tensor manually and tell me what output tensor you want? Like:

import tensorflow as tf
tensor = tf.constant([[[1,2,3],[3,4,5]],[[6,7,8],[9,10,11]]])
print(tensor.shape)
print(tensor)

Sure. In the following model, I need to convert the conv1 layer to numpy:

from tensorflow.keras.layers import Input, Conv2D

inputs = Input(shape=(48, 48, 1))
conv1 = Conv2D(32, kernel_size=5, padding='same')(inputs)
# here I need to convert conv1 to numpy #

conv1 is a KerasTensor of shape (None, 48, 48, 32). I need to convert it to numpy so I can iterate over the 32 feature maps and manipulate them individually, then wrap them all into a single list and convert it back to a KerasTensor to be fed to the next layer in the model (a sketch of one way to do this follows the note below).

Note: print(conv1) outputs:

KerasTensor(type_spec=TensorSpec(shape=(None, 48, 48, 32), dtype=tf.float32, name=None), name='conv2d/BiasAdd:0', description="created by layer 'conv2d'")

and conv1.shape gives:
(None, 48, 48, 32)
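If the manipulation really must happen in numpy, one escape hatch (not raised in the thread, so treat it as an assumption) is tf.py_function wrapped in a Lambda layer: the wrapped function runs eagerly, so .numpy() works inside it. Note that it is slow, cannot be serialized into a pure graph, and numpy steps block gradient flow. A minimal sketch with a placeholder manipulation:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Lambda

def manipulate(x):
    # Inside tf.py_function, x is an EagerTensor, so .numpy() works.
    arr = x.numpy()                           # (batch, 48, 48, 32)
    for c in range(arr.shape[-1]):            # iterate over the 32 feature maps
        fm = arr[..., c]
        arr[..., c] = fm / (fm.max() + 1e-7)  # placeholder manipulation
    return arr.astype(np.float32)

def numpy_bridge(x):
    y = tf.py_function(manipulate, inp=[x], Tout=tf.float32)
    y.set_shape(x.shape)                      # py_function drops static shape info
    return y

inputs = Input(shape=(48, 48, 1))
conv1 = Conv2D(32, kernel_size=5, padding='same')(inputs)
conv1 = Lambda(numpy_bridge)(conv1)           # numpy manipulation between layers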
Thanks a lot for your patience and help!

Generally, I suggest you start by expressing what you want to achieve with something simpler, like a plain tensor manipulation, without introducing layers, inputs, etc.
That way you can check whether you can vectorize the operations you need with TensorFlow ops, because manually iterating over a tensor with loops is going to be quite inefficient in general.

I suggest you start with a small, manually filled tensor as your dummy feature map.

E.g.

import tensorflow as tf

tensor = tf.constant([[[1, 2, 3], [3, 4, 5]], [[6, 7, 8], [9, 10, 11]]])
print(tensor.shape)
print(tensor)
print(tf.reduce_max(tensor, axis=2))   # max over the last axis
print(tf.reduce_min(tensor, axis=2))   # min over the last axis
(2, 2, 3)

tf.Tensor(
[[[ 1  2  3]
  [ 3  4  5]]
 [[ 6  7  8]
  [ 9 10 11]]], shape=(2, 2, 3), dtype=int32)

tf.Tensor(
[[ 3  5]
 [ 8 11]], shape=(2, 2), dtype=int32)

tf.Tensor(
[[1 3]
 [6 9]], shape=(2, 2), dtype=int32)

Then I think you can define your own "dummy" manually filled feature map with a small tensor and say:

"I have this small dummy input tensor, and I want this output tensor as a result."


Thank you so much, dear sir, for your kind help and support; you have been very patient in simplifying the issue for me. I will start over as you advise, and hopefully I can figure it out with tensor ops only.
Thanks once again!

Same problem here: I use model.get_layer('den_shape').output to get the feature map, its type is KerasTensor, and converting the KerasTensor to numpy raises an error. Have you found a solution? I guess it's related to eager execution vs. graph execution.
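(For reference, that guess is on the right track: .numpy() exists only on eager tensors, while model.get_layer(...).output is a symbolic KerasTensor with no values attached. The usual fix is to run an intermediate model on real data; a minimal sketch, where model, the layer name 'den_shape', and batch are assumed from the post above:)

from tensorflow.keras.models import Model

# 'model', 'den_shape', and 'batch' are assumptions taken from the post above
extractor = Model(inputs=model.input,
                  outputs=model.get_layer('den_shape').output)
features = extractor.predict(batch)   # returns a plain numpy array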