Prune Spatio-temporal Tokens by Semantic-aware Temporal Accumulation

Shuangrui Ding 1
Peisen Zhao 2
Xiaopeng Zhang 2
Rui Qian 3
Hongkai Xiong 1
Qi Tian 2
1Shanghai Jiao Tong University   2Huawei Cloud   3The Chinese University of Hong Kong  

ICCV 2023


Abstract

Transformers have become the primary backbone of the computer vision community due to their impressive performance. However, the unfriendly computation cost impedes their potential in the video recognition domain. To optimize the speed-accuracy trade-off, we propose the Semantic-aware Temporal Accumulation score (STA) to prune spatio-temporal tokens integrally. The STA score considers two critical factors: temporal redundancy and semantic importance. The former depicts a specific region based on whether it is a new occurrence or a seen entity, by aggregating token-to-token similarity in consecutive frames, while the latter evaluates each token based on its contribution to the overall prediction. As a result, tokens with higher STA scores carry more temporal redundancy as well as lower semantics, and are thus pruned. Based on the STA score, we are able to progressively prune the tokens without introducing any additional parameters or requiring further re-training. We directly apply the STA module to off-the-shelf ViT and VideoSwin backbones, and the empirical results on Kinetics-400 and Something-Something V2 achieve over 30% computation reduction with a negligible ∼0.2% accuracy drop. The code is released here.
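To make the scoring idea concrete, below is a minimal PyTorch sketch of an STA-style token-pruning step, not the paper's exact formulation. It assumes frame-wise patch tokens of shape (T, N, C), proxies temporal redundancy by each token's best cosine match in the previous frame, and proxies semantic importance by a given class-token attention map `cls_attn` (an assumed input); tokens with the highest combined score are pruned.

```python
# Illustrative STA-style pruning sketch (assumptions noted above;
# not the authors' released implementation).
import torch
import torch.nn.functional as F

def sta_score(tokens: torch.Tensor, cls_attn: torch.Tensor) -> torch.Tensor:
    """tokens: (T, N, C) patch tokens; cls_attn: (T, N) class-token attention.
    Returns a (T, N) score where HIGHER means more prunable."""
    x = F.normalize(tokens, dim=-1)                      # unit vectors for cosine sim
    # Temporal redundancy: each token's best cosine match in the previous frame.
    sim = torch.einsum('tnc,tmc->tnm', x[1:], x[:-1])    # (T-1, N, N)
    redundancy = sim.max(dim=-1).values                  # (T-1, N)
    # The first frame has no predecessor: treat its tokens as new (redundancy 0).
    redundancy = torch.cat([torch.zeros_like(redundancy[:1]), redundancy], dim=0)
    # High redundancy combined with low semantic importance -> high (prunable) score.
    return redundancy * (1.0 - cls_attn)

def prune_tokens(tokens, cls_attn, keep_ratio=0.7):
    """Keep the `keep_ratio` fraction of tokens with the LOWEST STA score."""
    T, N, C = tokens.shape
    score = sta_score(tokens, cls_attn)
    keep = max(1, int(N * keep_ratio))
    idx = score.topk(keep, dim=1, largest=False).indices           # (T, keep)
    return tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, C))   # (T, keep, C)

# Usage: drop 30% of tokens per frame, loosely matching the ~30%
# computation-reduction regime reported in the abstract.
tokens = torch.randn(8, 196, 768)    # 8 frames, 14x14 patches, ViT-B width
cls_attn = torch.softmax(torch.randn(8, 196), dim=-1)
pruned = prune_tokens(tokens, cls_attn, keep_ratio=0.7)
print(pruned.shape)                  # torch.Size([8, 137, 768])
```

In this sketch pruning is done once per frame with a fixed ratio; the progressive, parameter-free pruning across backbone stages described in the abstract would apply such a step repeatedly at successive layers.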


Publications

Prune Spatio-temporal Tokens by Semantic-aware Temporal Accumulation.
Shuangrui Ding, Peisen Zhao, Xiaopeng Zhang, Rui Qian, Hongkai Xiong, Qi Tian
ICCV, 2023

Acknowledgements

This work was done while Shuangrui Ding was an intern at Huawei Cloud. It was supported in part by the National Natural Science Foundation of China under Grant 62250055, Grant 61932022, and Grant 62120106007, and in part by the Program of Shanghai Science and Technology Innovation Project under Grant 20511100100.



Webpage template modified from Richard Zhang.