The hub.KerasLayer object does not allow modification of existing layer parameters. This is a serious limitation for users who would like to reuse tfhub models in their own architectures with minor changes. Some specific use cases:
- Residual networks (2D and 3D) can be dilated to increase the spatial size of their feature maps, which is known to benefit performance in tasks such as semantic segmentation and image classification. This requires changing the stride and dilation parameters of existing convolutional layers.
- Removing temporal pooling in 3D CNNs. In many applications, such as tracking and action detection, it is desirable to remove temporal pooling so that the output has the same number of frames as the input. This also requires modifying layer parameters.
- Access to intermediate activations is important for explainability. This requires accessing intermediate layers, which is currently not supported.
- Access to intermediate layers is also needed to use multiple feature levels of a CNN directly, as in FPN and in some object detectors such as MSCNN.
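To make the first two use cases concrete, here is a small sketch (plain Python, no TensorFlow; the helper `conv_out_len` is hypothetical) of the standard convolution output-size arithmetic. It shows why swapping a strided convolution for a dilated one preserves feature-map resolution, which is exactly the parameter change that a frozen hub.KerasLayer currently prevents:

```python
def conv_out_len(n, kernel, stride=1, dilation=1, padding="same"):
    """Output length of a convolution along one axis.

    Effective kernel size: k_eff = dilation * (kernel - 1) + 1.
    With 'same' padding the framework pads so output = ceil(n / stride);
    with 'valid' padding, output = (n - k_eff) // stride + 1.
    """
    if padding == "same":
        return -(-n // stride)  # ceil division
    k_eff = dilation * (kernel - 1) + 1
    return (n - k_eff) // stride + 1

# A stride-2 layer halves the feature map (56 -> 28). Replacing it with a
# stride-1, dilation-2 layer keeps the full 56-wide map while still
# enlarging the receptive field -- the usual trick for dense prediction.
strided = conv_out_len(56, kernel=3, stride=2)              # 28
dilated = conv_out_len(56, kernel=3, stride=1, dilation=2)  # 56
print(strided, dilated)
```

The same arithmetic applies along the temporal axis of a 3D CNN: setting temporal stride and pooling to 1 keeps the frame count unchanged between input and output.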
I therefore believe that the usability of tfhub models would increase tremendously if hub.KerasLayer objects could be modified. Is there any plan to support this? It seems important for broadening the reach of tfhub.