MCG-NJU / VideoMAE
[NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
☆1,505 · Updated last year
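For orientation, here is a minimal sketch (not taken from the official code; names and defaults are illustrative) of the tube-masking step behind VideoMAE's data-efficient pre-training: a single random spatial mask is sampled and shared across all temporal patch slices, hiding roughly 90% of tokens before the encoder sees the video.

```python
import numpy as np

def tube_masking(num_temporal_slices: int, patches_per_frame: int, mask_ratio: float = 0.9) -> np.ndarray:
    """Return a boolean mask of shape (num_temporal_slices, patches_per_frame).

    True marks a masked (hidden) patch. The spatial pattern is sampled once and
    repeated along the temporal axis, so each space-time "tube" of patches is
    either fully visible or fully masked.
    """
    num_masked = int(mask_ratio * patches_per_frame)
    frame_mask = np.hstack([
        np.ones(num_masked, dtype=bool),                        # masked positions
        np.zeros(patches_per_frame - num_masked, dtype=bool),   # visible positions
    ])
    np.random.shuffle(frame_mask)                               # random spatial pattern
    return np.tile(frame_mask, (num_temporal_slices, 1))        # shared across time

# Example (illustrative sizes): a 16-frame clip with temporal patch size 2 gives
# 8 temporal slices; a 224x224 input with 16x16 patches gives 14*14 = 196 patches per slice.
mask = tube_masking(num_temporal_slices=8, patches_per_frame=196)
print(mask.shape, int(mask.sum()))  # (8, 196), 8 * 176 masked tokens
```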
Alternatives and similar repositories for VideoMAE
Users interested in VideoMAE are comparing it to the libraries listed below
- [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking ☆641 · Updated 7 months ago
- VideoX: a collection of video cross-modal models ☆1,026 · Updated 11 months ago
- The official PyTorch implementation of our paper "Is Space-Time Attention All You Need for Video Understanding?" ☆1,694 · Updated last year
- This is an official implementation for "Video Swin Transformer". ☆1,549 · Updated 2 years ago
- ☆855 · Updated last year
- [ICLR2022] official implementation of UniFormer ☆866 · Updated last year
- [ECCV2024] VideoMamba: State Space Model for Efficient Video Understanding ☆955 · Updated 10 months ago
- Code release for ActionFormer (ECCV 2022) ☆492 · Updated last year
- Code Release for MViTv2 on Image Recognition. ☆426 · Updated 6 months ago
- [ECCV2024] Video Foundation Models & Data for Multimodal Understanding ☆1,887 · Updated last week
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆985 · Updated last year
- Implementation of ViViT: A Video Vision Transformer ☆537 · Updated 3 years ago
- [ICCV2023] UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer ☆315 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI ☆2,496 · Updated 9 months ago
- This is the official implementation of the paper "ActionCLIP: A New Paradigm for Action Recognition" ☆554 · Updated last year
- PyTorch implementation of MoCo v3 https://arxiv.org/abs/2104.02057 ☆1,269 · Updated 3 years ago
- [ICLR'23] AIM: Adapting Image Models for Efficient Video Action Recognition ☆293 · Updated last year
- PyTorch implementation of a collection of scalable Video Transformer Benchmarks. ☆297 · Updated 3 years ago
- Official Open Source code for "Masked Autoencoders As Spatiotemporal Learners" ☆338 · Updated 6 months ago
- An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆946 · Updated last year
- Video Swin Transformer - PyTorch ☆254 · Updated 3 years ago
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,110 · Updated last year
- This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling". ☆975 · Updated 2 years ago
- Omnivore: A Single Model for Many Visual Modalities ☆564 · Updated 2 years ago
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.). ☆831 · Updated 10 months ago
- Implementation of TimeSformer from Facebook AI, a pure attention-based solution for video classification ☆716 · Updated 3 years ago
- This is a collection of our NAS and Vision Transformer work. ☆1,759 · Updated 10 months ago
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions ☆1,370 · Updated last year
- Code release for ConvNeXt V2 model ☆1,747 · Updated 9 months ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆1,966 · Updated last year