Imageomics / INTR
This is the official implementation of [ICLR'24] INTR: Interpretable Transformer for Fine-grained Image Classification.
☆51 · Updated last year
Alternatives and similar repositories for INTR
Users interested in INTR are comparing it to the repositories listed below.
- ☆42 · Updated last year
- [CVPR 2024] Code for our paper "DeiT-LT: Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets" ☆41 · Updated 6 months ago
- Generating Image Specific Text ☆28 · Updated last year
- PyTorch reimplementation of FlexiViT: One Model for All Patch Sizes ☆62 · Updated last year
- PyTorch implementation for CVPR 2024 paper: Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation ☆48 · Updated this week
- Official implementation of Attentive Mask CLIP (ICCV 2023, https://arxiv.org/abs/2212.08653) ☆32 · Updated last year
- TRT for WSOL ☆30 · Updated last year
- [CVPR 2023 Highlight] Masked Image Modeling with Local Multi-Scale Reconstruction ☆50 · Updated 2 years ago
- [CVPR'23 & TPAMI'25] Hard Patches Mining for Masked Image Modeling ☆98 · Updated 3 months ago
- [CVPR'24] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆74 · Updated last month
- [ICLR 2023] Masked Frequency Modeling for Self-Supervised Visual Pre-Training ☆75 · Updated 2 years ago
- MaskCon: Masked Contrastive Learning for Coarse-Labeled Dataset (CVPR 2023) ☆34 · Updated 3 months ago
- [NeurIPS 2022] Code for the paper "SemMAE: Semantic-guided masking for learning masked autoencoders" ☆38 · Updated 2 years ago
- CLIP-Mamba: CLIP Pretrained Mamba Models with OOD and Hessian Evaluation ☆75 · Updated 11 months ago
- Source code of the paper Fine-Grained Visual Classification via Internal Ensemble Learning Transformer