GitGyun / visual_token_matching
[ICLR'23 Oral] Universal Few-shot Learning of Dense Prediction Tasks with Visual Token Matching
☆253 · Updated last year
Alternatives and similar repositories for visual_token_matching
Users interested in visual_token_matching are comparing it to the repositories listed below.
- Exploring Visual Prompts for Adapting Large-Scale Models ☆280 · Updated 2 years ago
- Is synthetic data from generative models ready for image recognition? ☆183 · Updated 2 years ago
- Official implementation and data release of the paper "Visual Prompting via Image Inpainting". ☆310 · Updated last year
- (NeurIPS 2022) Self-Supervised Visual Representation Learning with Semantic Grouping ☆97 · Updated 2 months ago
- (ICLR 2023) Official PyTorch implementation of "What Do Self-Supervised Vision Transformers Learn?" ☆110 · Updated last year
- [NeurIPS 2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?" ☆173 · Updated last year
- Reading list for research topics in Masked Image Modeling ☆333 · Updated 6 months ago
- Official implementation of TCL (CVPR 2023) ☆112 · Updated 2 years ago
- PyTorch implementation of the paper "MILAN: Masked Image Pretraining on Language Assisted Representation" https://arxiv.org/pdf/2208.0604… ☆83 · Updated 2 years ago
- Official PyTorch implementation of "Extract Free Dense Labels from CLIP" (ECCV 22 Oral) ☆446 · Updated 2 years ago
- [CVPR 2023] This repository includes the official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers" ☆106 · Updated last year
- Open-vocabulary Semantic Segmentation ☆174 · Updated 2 years ago
- Official implementation of SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference ☆156 · Updated 7 months ago
- ☆59 · Updated last year
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting ☆535 · Updated last year
- Official code for "Top-Down Visual Attention from Analysis by Synthesis" (CVPR 2023 highlight) ☆166 · Updated last year
- Official implementation for the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" ☆257 · Updated last year
- [CVPR 2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training" ☆151 · Updated last year
- IFSeg: Image-free Semantic Segmentation via Vision-Language Model (CVPR 2023) ☆90 · Updated last year
- [ECCV 2022] Bootstrapped Masked Autoencoders for Vision BERT Pretraining ☆97 · Updated 2 years ago
- [ICCV 2023] Code for "Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement" ☆148 · Updated last year
- [CVPR 2023] Official code for "Zero-shot Referring Image Segmentation with Global-Local Context Features" ☆122 · Updated 2 months ago
- PyTorch implementation of R-MAE https://arxiv.org/abs/2306.05411 ☆113 · Updated last year
- Code for the paper "Hyperbolic Image-Text Representations", Desai et al., ICML 2023 ☆166 · Updated last year
- ☆59 · Updated 3 years ago
- [TPAMI] Searching prompt modules for parameter-efficient transfer learning. ☆231 · Updated last year
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆216 · Updated 2 years ago
- [NeurIPS'22] This is an official implementation for "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning". ☆182 · Updated last year
- This is a PyTorch implementation of "Context AutoEncoder for Self-Supervised Representation Learning" ☆196 · Updated 2 years ago
- PyTorch implementation of ICML 2023 paper "SegCLIP: Patch Aggregation with Learnable Centers for Open-Vocabulary Semantic Segmentation" ☆92 · Updated last year