ml-jku / MIM-Refiner
A Contrastive Learning Boost from Intermediate Pre-Trained Representations
☆42 · Updated 11 months ago
Alternatives and similar repositories for MIM-Refiner
Users interested in MIM-Refiner are comparing it to the repositories listed below.
- ☆32 · Updated last year
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated 11 months ago
- PyTorch implementation of Semi-supervised Vision Transformers ☆59 · Updated 2 years ago
- PyTorch implementation of R-MAE https://arxiv.org/abs/2306.05411 ☆114 · Updated 2 years ago
- Official implementation of the CrossMAE paper: Rethinking Patch Dependence for Masked Autoencoders ☆117 · Updated 4 months ago
- Code and models for the paper "The effectiveness of MAE pre-pretraining for billion-scale pretraining" https://arxiv.org/abs/2303.13496 ☆92 · Updated 4 months ago
- An official PyTorch/GPU implementation of SupMAE ☆78 · Updated 2 years ago
- PyTorch code and pretrained weights for the UNIC models ☆37 · Updated 11 months ago
- PyTorch reimplementation of FlexiViT: One Model for All Patch Sizes ☆62 · Updated last year
- [ICML 2024] Official implementation of the paper "Rejuvenating image-GPT as Strong Visual Representation Lea…" ☆98 · Updated last year
- [NeurIPS 2022] Official implementation of the paper "Expediting Large-Scale Vision Transformer for Dense Prediction without Fi…" ☆85 · Updated last year
- [CVPR'23 & TPAMI'25] Hard Patches Mining for Masked Image Modeling & Bootstrap Masked Visual Modeling via Hard Patch Mining ☆101 · Updated 4 months ago
- [CVPR 2024] Official implementation of GEM (Grounding Everything Module)