facebookresearch / mae_st
Official Open Source code for "Masked Autoencoders As Spatiotemporal Learners"
☆338 · Updated 6 months ago
Alternatives and similar repositories for mae_st
Users interested in mae_st are comparing it to the repositories listed below.
- MultiMAE: Multi-modal Multi-task Masked Autoencoders, ECCV 2022 ☆578 · Updated 2 years ago
- A curated list of awesome self-supervised learning methods in videos ☆140 · Updated 3 weeks ago
- Official Open Source code for "Scaling Language-Image Pre-training via Masking" ☆425 · Updated 2 years ago
- [Survey] Masked Modeling for Self-supervised Representation Learning on Vision and Beyond (https://arxiv.org/abs/2401.00897) ☆331 · Updated last month
- Official Implementation of the CrossMAE paper: Rethinking Patch Dependence for Masked Autoencoders ☆112 · Updated last month
- ConvMAE: Masked Convolution Meets Masked Autoencoders ☆505 · Updated 2 years ago
- Code Release for "MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition", CVPR 2022 ☆148 · Updated 2 years ago
- [ECCV 2024] Official PyTorch implementation of RoPE-ViT "Rotary Position Embedding for Vision Transformer" ☆326 · Updated 5 months ago
- Official code for "Top-Down Visual Attention from Analysis by Synthesis" (CVPR 2023 highlight) ☆166 · Updated last year
- [NeurIPS 2022] Implementation of "AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition" ☆357 · Updated 2 years ago
- Code Release for MViTv2 on Image Recognition ☆426 · Updated 6 months ago
- The official implementation of CMAE (https://arxiv.org/abs/2207.13532 and https://ieeexplore.ieee.org/document/10330745) ☆103 · Updated last year
- (ICLR 2023) Official PyTorch implementation of "What Do Self-Supervised Vision Transformers Learn?" ☆110 · Updated last year
- [T-PAMI] A curated list of self-supervised multimodal learning resources ☆254 · Updated 9 months ago
- Reading list for research topics in Masked Image Modeling ☆333 · Updated 5 months ago
- Learning from synthetic data - code and models ☆315 · Updated last year
- Code + pre-trained models for the paper "Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers" ☆229 · Updated 2 years ago
- A PyTorch implementation of "Context AutoEncoder for Self-Supervised Representation Learning" ☆110 · Updated last year
- Hiera: A fast, powerful, and simple hierarchical vision transformer ☆985 · Updated last year
- A PyTorch implementation of MAGE: MAsked Generative Encoder to Unify Representation Learning and Image Synthesis ☆564 · Updated 2 years ago
- [CVPR 2023] VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking ☆641 · Updated 7 months ago
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆456 · Updated 3 years ago
- [ICLR'23] AIM: Adapting Image Models for Efficient Video Action Recognition ☆293 · Updated last year
- Another PyTorch implementation of "Context AutoEncoder for Self-Supervised Representation Learning" ☆196 · Updated 2 years ago
- [CVPR'23] AdaMAE: Adaptive Masking for Efficient Spatiotemporal Learning with Masked Autoencoders ☆79 · Updated last year
- Open source implementation of "Vision Transformers Need Registers" ☆177 · Updated last month
- Implementation of Slot Attention from GoogleAI ☆431 · Updated 9 months ago
- Official code for "Uniform Masking: Enabling MAE Pre-training for Pyramid-based Vision Transformers with Locality" ☆242 · Updated 2 years ago
- A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.) ☆831 · Updated 10 months ago
- Code and models for the paper "The effectiveness of MAE pre-pretraining for billion-scale pretraining" (https://arxiv.org/abs/2303.13496) ☆89 · Updated last month