Code for Finetune Like You Pretrain: Improved Finetuning of Zero-Shot Vision Models
☆106 · Updated Aug 13, 2023
Alternatives and similar repositories for FLYP
Users interested in FLYP are comparing it to the repositories listed below.
- This repo implements the CVPR 2023 paper "Trainable Projected Gradient Method for Robust Fine-tuning" ☆24 · Updated Nov 27, 2023
- Robust fine-tuning of zero-shot models ☆760 · Updated Apr 29, 2022
- Framework code with wandb, checkpointing, logging, configs, and experimental protocols; useful for fine-tuning models or training from scratch ☆154 · Updated Jan 14, 2023
- ☆13 · Updated Apr 7, 2024
- [ICCV 2023] Code for "Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement" ☆149 · Updated Apr 21, 2024
- ☆16 · Updated Sep 29, 2024
- ☆95 · Updated Sep 23, 2023
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning ☆168 · Updated Jul 15, 2023
- PyTorch implementation of our ECCV 2022 paper "Rethinking Confidence Calibration for Failure Prediction" ☆26 · Updated Jun 10, 2023
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models Makes Strong Few-shot Learners ☆381 · Updated Jun 1, 2023
- ☆106 · Updated Dec 7, 2023
- Code release for "CLIPood: Generalizing CLIP to Out-of-Distributions" (ICML 2023), https://arxiv.org/abs/2302.00864 ☆70 · Updated Sep 17, 2023
- ☆666 · Updated Nov 28, 2023
- ☆64 · Updated Apr 9, 2024
- Test-Time Adaptation via Conjugate Pseudo-Labels ☆42 · Updated May 25, 2023
- Scene and animal attribute retrieval from camera-trap data with domain-adapted vision-language models ☆28 · Updated Mar 8, 2024
- An official PyTorch implementation for CLIPPR ☆30 · Updated Jul 22, 2023
- ☆25 · Updated Jun 22, 2023
- [ICCV 2023] Bayesian Prompt Learning for Image-Language Model Generalization ☆40 · Updated Oct 6, 2023
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆223 · Updated Dec 16, 2022
- Code for the paper "Energy-Based Open-World Uncertainty Modeling for Confidence Calibration" ☆41 · Updated Aug 26, 2021
- Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts" ☆79 · Updated May 5, 2024
- [TMLR'24] Official implementation of our paper "Unleashing the Power of Visual Prompting at the Pixel Level" ☆42 · Updated Apr 30, 2024
- Code for T-MARS data filtering ☆35 · Updated Aug 23, 2023
- [NeurIPS 2024] "Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection" ☆13 · Updated Oct 28, 2024
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆2,184 · Updated May 20, 2024
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆141 · Updated Dec 16, 2025
- [ICLR'24] Consistency-guided Prompt Learning for Vision-Language Models ☆85 · Updated May 24, 2024
- Code for the paper "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] ☆105 · Updated Aug 22, 2023
- [ICML'24] Open-Vocabulary Calibration for Fine-tuned CLIP ☆18 · Updated Jun 14, 2024
- Source code for the NeurIPS'24 paper "Towards Calibrated Robust Fine-Tuning of Vision-Language Models" ☆14 · Updated Oct 31, 2025
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Updated Apr 4, 2024
- Model calibration in CLIP adapters ☆20 · Updated Aug 19, 2024
- Official code for the ICLR 2024 paper "A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation" ☆85 · Updated Apr 21, 2024
- Cross-modal few-shot adaptation with CLIP ☆352 · Updated Apr 29, 2025
- [ICCV 2023] CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No ☆142 · Updated Dec 2, 2023
- Learning to compose soft prompts for compositional zero-shot learning ☆94 · Updated Sep 13, 2025
- [ECCV 2022] "Improve Few-Shot Transfer Learning with Low-Rank Decompose and Align" by Ziyu Jiang, Tianlong Chen, Xuxi Chen, Yu Cheng, Luo… ☆13 · Updated Jul 19, 2022
- Evaluate robustness of adaptation methods on large vision-language models ☆19 · Updated Aug 23, 2023