MCG-NJU / VideoMAE
[NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
☆1,323 · Updated 9 months ago
Related projects:
- VideoX: a collection of video cross-modal models ☆966 · Updated 3 months ago
- [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking ☆485 · Updated 6 months ago
- [ICLR 2022] Official implementation of UniFormer ☆816 · Updated 5 months ago
- This is an official implementation for "Video Swin Transformer". ☆1,407 · Updated last year
- The official PyTorch implementation of our paper "Is Space-Time Attention All You Need for Video Understanding?" ☆1,514 · Updated 5 months ago
- Implementation of ViViT: A Video Vision Transformer ☆503 · Updated 3 years ago
- ☆742 · Updated 4 months ago
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆1,300 · Updated 3 weeks ago
- [ECCV 2024] VideoMamba: State Space Model for Efficient Video Understanding ☆779 · Updated 2 months ago
- Grounded Language-Image Pre-training ☆2,154 · Updated 7 months ago
- This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling". ☆916 · Updated last year
- Code release for ActionFormer (ECCV 2022) ☆419 · Updated 5 months ago
- PyTorch implementation of a collection of scalable Video Transformer benchmarks. ☆273 · Updated 2 years ago
- This is the official implementation of the paper "ActionCLIP: A New Paradigm for Action Recognition" ☆496 · Updated 9 months ago
- Code release for MViTv2 on Image Recognition. ☆389 · Updated last week
- PyTorch implementation of MoCo v3 (https://arxiv.org/abs/2104.02057) ☆1,201 · Updated 2 years ago
- [ICCV 2023] UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer ☆282 · Updated 5 months ago
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions ☆1,215 · Updated 6 months ago
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] (https://arxiv.org/abs/2203.12119) ☆994 · Updated last year
- Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification ☆688 · Updated 3 years ago
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) ☆1,281 · Updated 3 months ago
- EVA Series: Visual Representation Fantasies from BAAI ☆2,209 · Updated last month
- Code release for ConvNeXt V2 model ☆1,467 · Updated last month
- ☆812 · Updated 2 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training) ☆1,107 · Updated 2 months ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,037 · Updated 9 months ago
- (TPAMI 2024) A Survey on Open Vocabulary Learning ☆794 · Updated 3 weeks ago
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022. ☆722 · Updated 2 years ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆1,679 · Updated 3 months ago
- This is a collection of our NAS and Vision Transformer work. ☆1,655 · Updated last month