saic-fi / LFA
[ICCV 2023] Black Box Few-Shot Adaptation for Vision-Language Models (☆21, updated 11 months ago)

Alternatives and similar repositories for LFA:
Users interested in LFA are comparing it to the repositories listed below.
- Official repository for the ICCV 2023 paper "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts…" (☆56, updated last year)
- This repository contains the code and datasets for our ICCV-W paper "Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts…" (☆28, updated last year)
- Compress conventional Vision-Language Pre-training data (☆49, updated last year)
- Repository for the paper "Teaching Structured Vision & Language Concepts to Vision & Language Models" (☆46, updated last year)
- This repository houses the code for the paper "The Neglected of VLMs" (☆28, updated 4 months ago)
- Distribution-Aware Prompt Tuning for Vision-Language Models (ICCV 2023) (☆38, updated last year)
- LaFTer: Label-Free Tuning of Zero-shot Classifier using Language and Unlabeled Image Collections (NeurIPS 2023) (☆28, updated last year)
- [ICCV 2023] Bayesian Prompt Learning for Image-Language Model Generalization (☆31, updated last year)
- Official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models) (☆114, updated 3 years ago)
- Official code release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) (☆33, updated last year)
- [CVPR 2024 Highlight] Official implementation of Transferable Visual Prompting, from the paper "Exploring the Transferability of Visual Prompt…" (☆39, updated 4 months ago)
- Official PyTorch implementation of "Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?" (ICLR 2024) (☆14, updated last year)
- Task Residual for Tuning Vision-Language Models (CVPR 2023) (☆72, updated last year)
- Code release for "Understanding Bias in Large-Scale Visual Datasets" (☆20, updated 4 months ago)
- Code for "Label Propagation for Zero-shot Classification with Vision-Language Models" (CVPR 2024) (☆36, updated 8 months ago)
- [NeurIPS 2023] Official PyTorch code for LOVM: Language-Only Vision Model Selection (☆20, updated last year)
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners (☆42, updated last year)
- [NeurIPS 2024] Frustratingly easy test-time adaptation of VLMs (☆43, updated 3 weeks ago)
- [CVPR 2022] PyTorch re-implementation of Prompt Distribution Learning (☆18, updated last year)
- [CVPR 2024] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… (☆69, updated last week)
- Code for "Multitask Vision-Language Prompt Tuning" (https://arxiv.org/abs/2211.11720) (☆56, updated 10 months ago)
- [CVPR 2023 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? (☆32, updated last year)
- Domain Generalization through Distilling CLIP with Language Guidance (☆28, updated last year)
- Source code for the NeurIPS 2023 paper "Dream the Impossible: Outlier Imagination with Diffusion Models" (☆68, updated this week)
- (CVPR 2023) Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning (☆28, updated last year)