SwinTransformer / Video-Swin-Transformer
This is an official implementation for "Video Swin Transformer".
☆1,627 · Updated 2 years ago
Alternatives and similar repositories for Video-Swin-Transformer
Users interested in Video-Swin-Transformer are comparing it to the repositories listed below
- The official pytorch implementation of our paper "Is Space-Time Attention All You Need for Video Understanding?" ☆1,822 · Updated last year
- ☆931 · Updated last year
- [ICLR2022] official implementation of UniFormer ☆895 · Updated last year
- Implementation of ViViT: A Video Vision Transformer ☆556 · Updated 4 years ago
- [NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training ☆1,674 · Updated 2 years ago
- Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification ☆727 · Updated 4 years ago
- Video Swin Transformer - PyTorch ☆265 · Updated 4 years ago
- PyTorch implementation of a collection of scalable Video Transformer Benchmarks. ☆305 · Updated 3 years ago
- [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking ☆746 · Updated last year
- Code Release for MViTv2 on Image Recognition. ☆449 · Updated last year
- Official implementation of PVT series ☆1,882 · Updated 3 years ago
- Code release for ActionFormer (ECCV 2022) ☆537 · Updated last year
- Recent Transformer-based CV and related works. ☆1,339 · Updated 2 years ago
- This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling". ☆1,021 · Updated 3 years ago
- ☆1,035 · Updated 5 years ago
- [ICCV 2019] TSM: Temporal Shift Module for Efficient Video Understanding ☆2,177 · Updated last year
- ICCV2021, Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet ☆1,193 · Updated 2 years ago
- PyTorch implementation of MoCo v3 https://arxiv.org/abs/2104.02057 ☆1,313 · Updated 4 years ago
- Pytorch reimplementation of the Vision Transformer (An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale) ☆2,119 · Updated 3 years ago
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) ☆1,364 · Updated last year
- This is a collection of our NAS and Vision Transformer work. ☆1,824 · Updated last year
- This is the official implementation of the paper "ActionCLIP: A New Paradigm for Action Recognition" ☆600 · Updated 2 years ago
- Implementation of the Swin Transformer in PyTorch. ☆856 · Updated 4 years ago
- OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark ☆4,904 · Updated last year
- [ICCV2023] UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer ☆339 · Updated last year
- OpenMMLab optical flow toolbox and benchmark ☆1,043 · Updated last year
- Repository of Vision Transformer with Deformable Attention (CVPR2022) and DAT++: Spatially Dynamic Vision Transformer with Deformable Atte… ☆924 · Updated last year
- Code release for ConvNeXt V2 model ☆1,956 · Updated last year
- VideoX: a collection of video cross-modal models ☆1,058 · Updated last year
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions ☆1,468 · Updated 8 months ago