Implementing ShiftViT

In our Keras example (with @Ritwik_Raha), we minimally implement ShiftViT.

ViTs can simultaneously model long- and short-range dependencies, thanks to the Multi-Head Self-Attention mechanism in the Transformer block. Many researchers believe that the success of ViTs is due purely to the attention layer, and they seldom consider the other parts of the ViT model.

The authors of ShiftViT set out to demystify the success of ViTs by introducing a parameter-free operation in place of attention: they replace the attention operation with a simple shifting operation.
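
To give a rough feel for the idea, here is a minimal TensorFlow sketch of such a shift: a small group of channels is moved one pixel in each of the four spatial directions, with zeros padding the vacated border. The function name `shift_features` and the `fraction` value are illustrative assumptions; see the full Keras example for the actual layer used.

```python
import tensorflow as tf


def shift_features(x, fraction=1.0 / 12.0):
    """Parameter-free shift: move small channel groups one pixel in each
    of the four spatial directions, zero-padding the exposed edge.

    `fraction` is the share of channels shifted per direction (treat the
    exact value as an assumption; the paper and example may differ).
    """
    # x has shape (batch, height, width, channels); assumes a static
    # channel dimension.
    num_channels = x.shape[-1]
    num_shift = int(num_channels * fraction)

    # Split channels into four groups to shift, plus an untouched rest.
    left, right, up, down, rest = tf.split(
        x,
        [num_shift, num_shift, num_shift, num_shift,
         num_channels - 4 * num_shift],
        axis=-1,
    )

    # Shift each group by one pixel and zero-pad the vacated border.
    left = tf.pad(left[:, :, 1:, :], [[0, 0], [0, 0], [0, 1], [0, 0]])
    right = tf.pad(right[:, :, :-1, :], [[0, 0], [0, 0], [1, 0], [0, 0]])
    up = tf.pad(up[:, 1:, :, :], [[0, 0], [0, 1], [0, 0], [0, 0]])
    down = tf.pad(down[:, :-1, :, :], [[0, 0], [1, 0], [0, 0], [0, 0]])

    return tf.concat([left, right, up, down, rest], axis=-1)
```

Because the shift introduces no learnable weights, whatever the model learns must come from the surrounding MLP and normalization layers, which is exactly the point the authors make.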


Keras example: A Vision Transformer without Attention
