

    UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning

Published at: ICLR

Kunchang Li1,2,3*, Yali Wang1*, Peng Gao3, Guanglu Song4

Yu Liu4, Hongsheng Li5, Yu Qiao1,3†

1Shenzhen Key Lab of Computer Vision and Pattern Recognition, SIAT-SenseTime Joint Lab,

    Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences

    2University of Chinese Academy of Sciences, 3Shanghai AI Laboratory, Shanghai, China

    4SenseTime Research, 5The Chinese University of Hong Kong

    {kc.li,yl.wang}@siat.ac.cn, {gaopeng,qiaoyu}@pjlab.org.cn

    songguanglu@sensetime.com, liuyuisanai@gmail.com

    hsli@ee.cuhk.edu.hk


    ABSTRACT

It is a challenging task to learn rich and multi-scale spatiotemporal semantics from high-dimensional videos, due to large local redundancy and complex global dependency between video frames. The recent advances in this research have been mainly driven by 3D convolutional neural networks and vision transformers. Although 3D convolution can efficiently aggregate local context to suppress local redundancy from a small 3D neighborhood, it lacks the capability to capture global dependency because of the limited receptive field. Alternatively, vision transformers can effectively capture long-range dependency via the self-attention mechanism, but they are limited in reducing local redundancy, since they perform blind similarity comparisons among all the tokens in each layer. Based on these observations, we propose a novel Unified transFormer (UniFormer), which seamlessly integrates the merits of 3D convolution and spatiotemporal self-attention in a concise transformer format, and achieves a preferable balance between computation and accuracy. Different from traditional transformers, our relation aggregator can tackle both spatiotemporal redundancy and dependency, by learning local and global token affinity respectively in shallow and deep layers. We conduct extensive experiments on the popular video benchmarks, e.g., Kinetics-400, Kinetics-600, and Something-Something V1&V2. With only ImageNet-1K pretraining, our UniFormer achieves 82.9%/84.8% top-1 accuracy on Kinetics-400/Kinetics-600, while requiring 10× fewer GFLOPs than other state-of-the-art methods. For Something-Something V1 and V2, our UniFormer achieves new state-of-the-art performance with 60.9% and 71.2% top-1 accuracy respectively. Code is available at https://github.com/Sense-X/UniFormer.
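The abstract's key idea — learning local token affinity in shallow layers (convolution-like, position-based) and global token affinity in deep layers (attention-like, content-based) — can be sketched as follows. This is a minimal NumPy illustration of the two aggregation styles under stated assumptions, not the paper's implementation: the window size, the random affinity weights, and the identity query/key/value projections are placeholders for learned components.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_relation(tokens, window=3):
    """Local aggregator (shallow layers): each token mixes only a small
    neighborhood with position-based affinity weights shared across
    positions -- analogous to a depthwise convolution. The random
    weights stand in for learned parameters."""
    n, c = tokens.shape
    affinity = np.random.default_rng(0).normal(size=window)
    out = np.zeros_like(tokens)
    half = window // 2
    for i in range(n):
        for k in range(window):
            j = i + k - half
            if 0 <= j < n:  # ignore out-of-range neighbors at the borders
                out[i] += affinity[k] * tokens[j]
    return out

def global_relation(tokens):
    """Global aggregator (deep layers): affinity comes from content
    similarity among all tokens, i.e., plain self-attention. Identity
    q/k/v projections keep the sketch short."""
    q = k = v = tokens
    scores = softmax(q @ k.T / np.sqrt(tokens.shape[1]))
    return scores @ v

# 8 tokens of dimension 4, e.g., a tiny flattened spatiotemporal grid.
x = np.random.default_rng(1).normal(size=(8, 4))
y_local = local_relation(x)    # cheap, neighborhood-only mixing
y_global = global_relation(x)  # quadratic, all-pairs mixing
```

The computational trade-off in the abstract follows directly: the local aggregator costs O(n·window) affinity terms, while the global one costs O(n²), which is why applying content-based attention only in deep layers (where the token count is smaller) saves GFLOPs.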

