VILA-Lab / i-mae
i-MAE PyTorch Repo
☆20 · Updated last year
Alternatives and similar repositories for i-mae
Users interested in i-mae are comparing it to the repositories listed below.
- [WACV 2025 Oral] DeepMIM: Deep Supervision for Masked Image Modeling ☆53 · Updated 4 months ago
- An official PyTorch/GPU implementation of SupMAE ☆78 · Updated 3 years ago
- Beyond Masking: Demystifying Token-Based Pre-Training for Vision Transformers ☆26 · Updated 3 years ago
- Official code for ConMIM (ICLR 2023) ☆58 · Updated 2 years ago
- [ECCV 2022] Revisiting the Critical Factors of Augmentation-Invariant Representation Learning ☆12 · Updated 3 years ago
- ☆16 · Updated 2 years ago
- [TIP] Exploring Effective Factors for Improving Visual In-Context Learning ☆19 · Updated 3 months ago
- Official code and pretrained models for RecursiveMix ☆22 · Updated 2 years ago
- PyTorch implementation of the ECCV 2022 paper Knowledge Condensation Distillation (https://arxiv.org/abs/2207.05409) ☆30 · Updated 2 years ago
- ☆72 · Updated 6 months ago
- Original code base for On Pretraining Data Diversity for Self-Supervised Learning ☆14 · Updated 9 months ago
- Code base for vision transformers ☆36 · Updated 3 years ago
- Code and models for the paper Glance-and-Gaze Vision Transformer ☆28 · Updated 4 years ago
- Learning to Mask and Permute Visual Tokens for Vision Transformer Pre-Training ☆16 · Updated 3 months ago
- Official implementation of the paper "Function-Consistent Feature Distillation" (ICLR 2023) ☆29 · Updated 2 years ago
- A Close Look at Spatial Modeling: From Attention to Convolution ☆91 · Updated 2 years ago
- Repo for the paper "Extrapolating from a Single Image to a Thousand Classes using Distillation" ☆37 · Updated last year
- Benchmarking Attention Mechanisms in Vision Transformers ☆18 · Updated 2 years ago
- [ICLR 2022] "As-ViT: Auto-scaling Vision Transformers without Training" by Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wa… ☆76 · Updated 3 years ago
- [ECCV 2024] Official implementation of "Stitched ViTs are Flexible Vision Backbones" ☆28 · Updated last year
- [ICME 2022] Code for the paper "SimViT: Exploring a Simple Vision Transformer with Sliding Windows" ☆68 · Updated 2 years ago
- Code for Point-Level Region Contrast (https://arxiv.org/abs/2202.04639) ☆35 · Updated 2 years ago
- Code for the ICML 2023 paper "When and How Does Known Class Help Discover Unknown Ones? Provable Understandings Through Spectral Analysis" ☆13 · Updated 2 years ago
- [NeurIPS 2022] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective ☆37 · Updated 2 years ago
- [CVPR 2022] Automated Progressive Learning for Efficient Training of Vision Transformers ☆25 · Updated 7 months ago
- Code for "What Images Are More Memorable to Machines?" ☆15 · Updated 2 years ago
- [ICLR 2023] "Layer Grafted Pre-training: Bridging Contrastive Learning and Masked Image Modeling for Better Representations", Ziyu Jian… ☆24 · Updated 2 years ago
- Official PyTorch implementation of "Semantic Diversity Learning for Zero-Shot Multi-label Classification" (ICCV 2021) ☆30 · Updated 3 years ago
- [ECCV 2022] AMixer: Adaptive Weight Mixing for Self-attention Free Vision Transformers ☆28 · Updated 2 years ago
- ☆85 · Updated 3 years ago