Keras Concatenate Performance

I am using Keras Conv2D layers together with a Concatenate layer to produce filtered images. This makes the model clearer, but I am not sure about the performance of such a construct. Conv2D is heavily optimized for different platforms (e.g. via TensorRT), but how is Concatenate implemented, and can the several similar convolution layer calculations run concurrently? I need advice on whether this image preprocessing approach is feasible in terms of performance.

For example, this is my implementation of a Gaussian image filter with Conv2D:

import numpy as np
import tensorflow as tf
from tensorflow.keras import Model, Input

# 5x5 Gaussian kernel, shape (5, 5, 1, 1): (height, width, in_channels, out_channels)
kernel = np.array([
    [[[0.00296902]], [[0.0133062]], [[0.0219382]], [[0.0133062]], [[0.00296902]]],
    [[[0.0133062]],  [[0.0596343]], [[0.0983203]], [[0.0596343]], [[0.0133062]]],
    [[[0.0219382]],  [[0.0983203]], [[0.162103]],  [[0.0983203]], [[0.0219382]]],
    [[[0.0133062]],  [[0.0596343]], [[0.0983203]], [[0.0596343]], [[0.0133062]]],
    [[[0.00296902]], [[0.0133062]], [[0.0219382]], [[0.0133062]], [[0.00296902]]]], dtype="float32")

init = tf.constant_initializer(kernel)

red_input = Input(shape=(256, 256, 1), name='red')
green_input = Input(shape=(256, 256, 1), name='green')
blue_input = Input(shape=(256, 256, 1), name='blue')

# trainable=False must be set on the layer itself (here via the constructor);
# setting it on the output tensor of a called layer has no effect.
conv1 = tf.keras.layers.Conv2D(1, kernel_size=(5, 5), padding='same',
                               kernel_initializer=init, use_bias=False,
                               trainable=False)(red_input)
conv2 = tf.keras.layers.Conv2D(1, kernel_size=(5, 5), padding='same',
                               kernel_initializer=init, use_bias=False,
                               trainable=False)(green_input)
conv3 = tf.keras.layers.Conv2D(1, kernel_size=(5, 5), padding='same',
                               kernel_initializer=init, use_bias=False,
                               trainable=False)(blue_input)
concatenated = tf.keras.layers.Concatenate()([conv1, conv2, conv3])
model = Model(inputs=[red_input, green_input, blue_input], outputs=concatenated, name='Color_Filter')

I use 3 Conv2D layers that preprocess the input image's color channels in parallel. The output is a color-filtered image that can be passed to the layers below. Similar constructions can be used for more complex filters and for other transformations such as scaling. For anyone interested, please see my blog.
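To make the channel-split-and-concatenate pattern concrete, here is a minimal, self-contained sketch of the same construction. It uses a simple 3x3 box-blur kernel as a stand-in for the Gaussian kernel above (an assumption for brevity, not the filter from the post), splits an RGB batch into per-channel inputs, and checks that the concatenated output has the expected shape:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model, Input

# Box-blur stand-in kernel, shape (3, 3, 1, 1); the Gaussian kernel from
# the post would be used the same way.
kernel = np.full((3, 3, 1, 1), 1.0 / 9.0, dtype="float32")
init = tf.constant_initializer(kernel)

# One single-channel input and one frozen Conv2D branch per color channel.
inputs = [Input(shape=(256, 256, 1)) for _ in range(3)]
branches = [tf.keras.layers.Conv2D(1, kernel_size=(3, 3), padding='same',
                                   kernel_initializer=init, use_bias=False,
                                   trainable=False)(x) for x in inputs]
out = tf.keras.layers.Concatenate()(branches)
model = Model(inputs=inputs, outputs=out, name='Color_Filter_Sketch')

# Split a batch of RGB images into three (batch, 256, 256, 1) channel arrays.
rgb = np.random.rand(2, 256, 256, 3).astype("float32")
r, g, b = np.split(rgb, 3, axis=-1)

filtered = model.predict([r, g, b], verbose=0)
print(filtered.shape)  # (2, 256, 256, 3)
```

Each interior output pixel is the mean of its 3x3 neighborhood in the corresponding input channel, so the construction really does apply the same filter to every channel independently before Concatenate stacks the results back into a 3-channel image.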