VILA-Lab / i-mae
i-MAE PyTorch Repo
☆20 · Updated last year
Alternatives and similar repositories for i-mae
Users interested in i-mae are comparing it to the repositories listed below.
- [WACV 2025 Oral] DeepMIM: Deep Supervision for Masked Image Modeling ☆54 · Updated 5 months ago
- This is an official PyTorch/GPU implementation of SupMAE. ☆78 · Updated 3 years ago
- Official codes for ConMIM (ICLR 2023) ☆58 · Updated 2 years ago
- ☆72 · Updated 7 months ago
- ☆16 · Updated 2 years ago
- Beyond Masking: Demystifying Token-Based Pre-Training for Vision Transformers ☆26 · Updated 3 years ago
- [ECCV 2024] This is the official implementation of "Stitched ViTs are Flexible Vision Backbones". ☆28 · Updated last year
- Official PyTorch implementation of our ECCV 2022 paper "Sliced Recursive Transformer" ☆66 · Updated 3 years ago
- Official PyTorch implementation for Distilling Image Classifiers in Object Detection (NeurIPS 2021) ☆32 · Updated 3 years ago
- [ECCV 2022] Revisiting the Critical Factors of Augmentation-Invariant Representation Learning ☆12 · Updated 3 years ago
- Code for Point-Level Region Contrast (https://arxiv.org/abs/2202.04639) ☆35 · Updated 2 years ago
- Official Codes and Pretrained Models for RecursiveMix ☆22 · Updated 2 years ago
- Teach-DETR: Better Training DETR with Teachers ☆31 · Updated last year
- [ECCV 2022] AMixer: Adaptive Weight Mixing for Self-attention Free Vision Transformers ☆28 · Updated 2 years ago
- Code and models for the paper Glance-and-Gaze Vision Transformer ☆28 · Updated 4 years ago
- Rethinking Nearest Neighbors for Visual Classification ☆31 · Updated 3 years ago
- Code for You Only Cut Once: Boosting Data Augmentation with a Single Cut, ICML 2022. ☆105 · Updated 2 years ago
- [ICME 2022] Code for the paper "SimViT: Exploring a Simple Vision Transformer with Sliding Windows". ☆68 · Updated 3 years ago
- Benchmarking Attention Mechanism in Vision Transformers. ☆18 · Updated 3 years ago
- [ICLR 2024] Exploring Target Representations for Masked Autoencoders ☆57 · Updated last year
- [TIP] Exploring Effective Factors for Improving Visual In-Context Learning ☆19 · Updated 3 months ago
- ☆43 · Updated 2 years ago
- Code base for vision transformers ☆36 · Updated 3 years ago
- TokenMix: Rethinking Image Mixing for Data Augmentation in Vision Transformers (ECCV 2022) ☆95 · Updated 3 years ago
- ☆85 · Updated 3 years ago
- A Close Look at Spatial Modeling: From Attention to Convolution ☆91 · Updated 2 years ago
- Paper List for In-context Learning 🌷 ☆20 · Updated 2 years ago
- Code of CropMix: Sampling a Rich Input Distribution via Multi-Scale Cropping ☆17 · Updated 3 years ago
- Prompt Generation Networks for Input-Space Adaptation of Frozen Vision Transformers. Jochem Loedeman, Maarten C. Stol, Tengda Han, Yuki M… ☆42 · Updated last year
- ☆37 · Updated last year