MCG-NJU / VideoMAE
[NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
☆1,613 · Updated last year
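A quick way to try the pre-trained VideoMAE models is through the Hugging Face `transformers` integration. Below is a minimal sketch, assuming the `transformers`, `torch`, and `numpy` packages and the `MCG-NJU/videomae-base-finetuned-kinetics` checkpoint on the Hub; it classifies a dummy 16-frame clip and is meant only as a starting point, not the repository's official pre-training or fine-tuning pipeline.

```python
# Minimal sketch: run a Kinetics-400 fine-tuned VideoMAE checkpoint from the
# Hugging Face Hub on a dummy clip. Assumes `transformers`, `torch`, and `numpy`
# are installed and the `MCG-NJU/videomae-base-finetuned-kinetics` checkpoint is available.
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

# VideoMAE expects 16 RGB frames; here we fake them with random pixels.
frames = list(np.random.randint(0, 256, size=(16, 224, 224, 3), dtype=np.uint8))

ckpt = "MCG-NJU/videomae-base-finetuned-kinetics"
processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

inputs = processor(frames, return_tensors="pt")  # resize, normalize, stack frames
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(-1).item()
print(model.config.id2label[pred])  # predicted Kinetics-400 label
```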
Alternatives and similar repositories for VideoMAE
Users interested in VideoMAE are comparing it to the repositories listed below.
- [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking ☆697 · Updated last year
- This is an official implementation for "Video Swin Transformer". ☆1,602 · Updated 2 years ago
- The official PyTorch implementation of our paper "Is Space-Time Attention All You Need for Video Understanding?" ☆1,793 · Updated last year
- VideoX: a collection of video cross-modal models ☆1,047 · Updated last year
- [ICLR 2022] Official implementation of UniFormer ☆887 · Updated last year
- Implementation of ViViT: A Video Vision Transformer ☆556 · Updated 4 years ago
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆1,001 · Updated last year
- Code release for ActionFormer (ECCV 2022) ☆522 · Updated last year
- [ECCV 2024] VideoMamba: State Space Model for Efficient Video Understanding ☆1,024 · Updated last year
- This is the official implementation of the paper "ActionCLIP: A New Paradigm for Action Recognition" ☆592 · Updated last year
- Code release for MViTv2 on image recognition ☆445 · Updated 11 months ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,183 · Updated last year
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆2,107 · Updated 3 months ago
- PyTorch implementation of a collection of scalable Video Transformer benchmarks ☆304 · Updated 3 years ago
- Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification ☆726 · Updated 4 years ago
- [ICCV 2023] UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer ☆333 · Updated last year
- Grounded Language-Image Pre-training ☆2,531 · Updated last year
- Hiera: A fast, powerful, and simple hierarchical vision transformer ☆1,038 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI ☆2,602 · Updated last year
- Extract video features from raw videos using multiple GPUs. We support RAFT flow frames as well as S3D, I3D, R(2+1)D, VGGish, CLIP, and T… ☆626 · Updated 9 months ago
- PyTorch implementation of MoCo v3 (https://arxiv.org/abs/2104.02057) ☆1,306 · Updated 3 years ago
- Video Swin Transformer - PyTorch ☆267 · Updated 3 years ago
- This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling" ☆1,004 · Updated 3 years ago
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆749 · Updated 3 years ago
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) ☆1,355 · Updated last year
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions ☆1,441 · Updated 5 months ago
- Official open-source code for "Masked Autoencoders As Spatiotemporal Learners" ☆354 · Updated 11 months ago
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision (CVPR 2022) ☆774 · Updated 3 years ago
- This is a collection of our NAS and Vision Transformer work ☆1,804 · Updated last year