Bug? TimeDistributed model compatibility

Just came across something rather strange: some architectures in keras.applications do not work directly with TimeDistributed.

For example, take the three architectures MobileNetV2, MobileNetV3Small, and ConvNeXtSmall. Wrapping MobileNetV2 in TimeDistributed works, but the other two fail.

import keras
from keras.applications import MobileNetV2, MobileNetV3Small, ConvNeXtSmall

# A batch of 8-frame clips: (time, height, width, channels)
input_ = keras.layers.Input(shape=(8, 224, 224, 3))

# Swap the base model to reproduce: MobileNetV2 works, the other two fail.
# base_model = MobileNetV2(include_top=True, input_shape=(224, 224, 3))
base_model = MobileNetV3Small(include_top=True, input_shape=(224, 224, 3))
# base_model = ConvNeXtSmall(include_top=True, input_shape=(224, 224, 3))

# Apply the same base model to every frame in the clip
output = keras.layers.TimeDistributed(base_model)(input_)
model = keras.Model(inputs=input_, outputs=output)

Tested with Python 3.8.10 and keras-nightly==2.10.0.dev2022060507. I have also tested with older versions of TF/Keras, but since ConvNeXt is a new addition to keras.applications, I could only observe this behaviour for that model in the nightly build. For MobileNetV3, however, I saw the same failure with TF==2.8.0.

As the error messages below show, Keras complains about two inner layers (TFOpLambda and LayerScale) that do not implement compute_output_shape, so the output shape cannot be inferred. I have tried enabling eager mode, but the behaviour is the same. Perhaps I am doing something wrong?

Error message for MobileNetV3:

Traceback (most recent call last):
  File ".\test_timedistributed.py", line 12, in <module>
    output = keras.layers.TimeDistributed(base_model)(input_)
  File "C:\Users\47955\workspace\sandbox\venv\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\47955\workspace\sandbox\venv\lib\site-packages\keras\engine\base_layer.py", line 879, in compute_output_shape
    raise NotImplementedError(
NotImplementedError: Exception encountered when calling layer "time_distributed" (type TimeDistributed).

Please run in eager mode or implement the compute_output_shape method on your layer (TFOpLambda).

Call arguments received by layer "time_distributed" (type TimeDistributed):
  • inputs=tf.Tensor(shape=(None, 8, 224, 224, 3), dtype=float32)
  • training=False
  • mask=None

and for ConvNeXtSmall:

Traceback (most recent call last):
  File ".\test_timedistributed.py", line 10, in <module>
    output = keras.layers.TimeDistributed(base_model)(input_)
  File "C:\Users\47955\workspace\sandbox\venv\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\47955\workspace\sandbox\venv\lib\site-packages\keras\engine\base_layer.py", line 879, in compute_output_shape
    raise NotImplementedError(
NotImplementedError: Exception encountered when calling layer "time_distributed" (type TimeDistributed).

Please run in eager mode or implement the compute_output_shape method on your layer (LayerScale).

Call arguments received by layer "time_distributed" (type TimeDistributed):
  • inputs=tf.Tensor(shape=(None, 8, 224, 224, 3), dtype=float32)
  • training=None
  • mask=None
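
For context, the error is asking those inner layers to declare their output shape statically, which TimeDistributed relies on when it cannot run the wrapped model eagerly. For a shape-preserving layer like ConvNeXt's LayerScale (it just multiplies by a learned per-channel scale), the missing method would be trivial. Below is a minimal sketch of the idea using a simplified stand-in, not the actual Keras source; the init_value and projection_dim parameters are assumptions mirroring the ConvNeXt design:

import tensorflow as tf

class PatchedLayerScale(tf.keras.layers.Layer):
    # Simplified stand-in for ConvNeXt's LayerScale: multiplies the input
    # by a learnable per-channel vector, so the shape never changes.
    def __init__(self, init_value=1e-6, projection_dim=96, **kwargs):
        super().__init__(**kwargs)
        self.init_value = init_value
        self.projection_dim = projection_dim

    def build(self, input_shape):
        self.gamma = self.add_weight(
            name="gamma",
            shape=(self.projection_dim,),
            initializer=tf.keras.initializers.Constant(self.init_value),
            trainable=True,
        )

    def call(self, inputs):
        return inputs * self.gamma

    # This is the method the traceback asks for: a static shape declaration.
    # LayerScale is shape-preserving, so it can simply echo the input shape.
    def compute_output_shape(self, input_shape):
        return input_shape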

As this seems like a bug inside Keras, I have posted an issue in the keras repo.
However, if anyone spots a simple fix, please let me know.
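
In the meantime, one workaround for this class of problem is to avoid TimeDistributed entirely: fold the time axis into the batch axis, run the base model once over all frames, and unfold the result afterwards. A minimal sketch, assuming the 8-frame, 224x224x3 setup from above (note it bypasses any masking support TimeDistributed would provide):

import tensorflow as tf
import keras
from keras.applications import MobileNetV3Small

frames, h, w, c = 8, 224, 224, 3
base_model = MobileNetV3Small(include_top=True, input_shape=(h, w, c))

input_ = keras.layers.Input(shape=(frames, h, w, c))
# Fold time into batch: (batch, 8, 224, 224, 3) -> (batch * 8, 224, 224, 3)
x = tf.reshape(input_, (-1, h, w, c))
x = base_model(x)  # (batch * 8, 1000) class scores
# Unfold back to (batch, 8, 1000)
output = tf.reshape(x, (-1, frames, x.shape[-1]))
model = keras.Model(inputs=input_, outputs=output)

Since every frame goes through the same weights anyway, this should be equivalent to TimeDistributed for fixed-size inputs.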
