SwinTransformer / Video-Swin-Transformer
This is an official implementation of "Video Swin Transformer".
☆1,623 · Updated 2 years ago
Alternatives and similar repositories for Video-Swin-Transformer
Users interested in Video-Swin-Transformer are comparing it to the libraries listed below.
- The official PyTorch implementation of our paper "Is Space-Time Attention All You Need for Video Understanding?" · ☆1,820 · Updated last year
- [ICLR 2022] Official implementation of UniFormer · ☆892 · Updated last year
- Implementation of ViViT: A Video Vision Transformer · ☆556 · Updated 4 years ago
- ☆928 · Updated last year
- Video Swin Transformer - PyTorch · ☆266 · Updated 4 years ago
- [NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training · ☆1,653 · Updated 2 years ago
- Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification · ☆727 · Updated 4 years ago
- PyTorch implementation of a collection of scalable Video Transformer benchmarks · ☆304 · Updated 3 years ago
- Code release for MViTv2 on image recognition · ☆451 · Updated last year
- [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking · ☆733 · Updated last year
- Official implementation of the PVT series · ☆1,875 · Updated 3 years ago
- This is an official implementation of "SimMIM: A Simple Framework for Masked Image Modeling" · ☆1,018 · Updated 3 years ago
- This is the official implementation of the paper "ActionCLIP: A New Paradigm for Action Recognition" · ☆597 · Updated 2 years ago
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) · ☆1,361 · Updated last year
- Recent Transformer-based CV and related works · ☆1,338 · Updated 2 years ago
- This is a collection of our NAS and Vision Transformer work · ☆1,819 · Updated last year
- PyTorch reimplementation of the Vision Transformer (An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale) · ☆2,118 · Updated 3 years ago
- Code release for ActionFormer (ECCV 2022) · ☆534 · Updated last year
- PyTorch implementation of MoCo v3 (https://arxiv.org/abs/2104.02057) · ☆1,309 · Updated 4 years ago
- ☆1,033 · Updated 5 years ago
- [ICCV 2019] TSM: Temporal Shift Module for Efficient Video Understanding · ☆2,173 · Updated last year
- [ICCV 2021] Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet · ☆1,191 · Updated 2 years ago
- Implementation of the Swin Transformer in PyTorch · ☆854 · Updated 4 years ago
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions · ☆1,457 · Updated 7 months ago
- Repository of Vision Transformer with Deformable Attention (CVPR 2022) and DAT++: Spatially Dynamic Vision Transformer with Deformable Atte… · ☆918 · Updated last year
- VideoX: a collection of video cross-modal models · ☆1,053 · Updated last year
- [ICCV 2023] UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer · ☆338 · Updated last year
- Code release for the ConvNeXt V2 model · ☆1,942 · Updated last year
- Assistant tools for attention visualization in deep learning · ☆1,257 · Updated 3 years ago
- Unofficial PyTorch implementation of Masked Autoencoders Are Scalable Vision Learners · ☆2,683 · Updated 2 years ago