cvl-umass / AdaptCLIPZS
☆45 · Oct 5, 2025 · Updated 4 months ago
Alternatives and similar repositories for AdaptCLIPZS
Users interested in AdaptCLIPZS are comparing it to the repositories listed below.
- This repository contains the code and datasets for our ICCV-W paper 'Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts…' ☆30 · Feb 21, 2024 · Updated last year
- PyTorch implementation of "Erasing the Bias: Fine-Tuning Foundation Models for Semi-Supervised Learning" (ICML 2024) ☆24 · May 11, 2025 · Updated 9 months ago
- [ICCV 2023] Black Box Few-Shot Adaptation for Vision-Language Models ☆26 · May 14, 2024 · Updated last year
- [CVPRW 2025] Official repository of the paper "Towards Evaluating the Robustness of Visual State Space Models" ☆25 · Jun 8, 2025 · Updated 8 months ago
- PyTorch implementation of InMaP ☆11 · Oct 28, 2023 · Updated 2 years ago
- [ICCV 2023] Code for "Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement" ☆148 · Apr 21, 2024 · Updated last year
- [AAAI'25, CVPRW 2024] Official repository of the paper "Learning to Prompt with Text Only Supervision for Vision-Language Models" ☆121 · Dec 17, 2024 · Updated last year
- [ICLR 2025] Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion ☆60 · Nov 30, 2025 · Updated 2 months ago
- Python code implementing DeIL, a CLIP-based approach for open-world few-shot learning ☆18 · Nov 4, 2024 · Updated last year
- Implementation of "DIME-FM: DIstilling Multimodal and Efficient Foundation Models" ☆15 · Oct 12, 2023 · Updated 2 years ago
- ☆13 · Jul 17, 2024 · Updated last year
- [ICML 2024] Official PyTorch implementation of CoMC: Language-Driven Cross-Modal Classifier for Zero-Shot Multi-Label Image Recognition ☆16 · Jul 9, 2024 · Updated last year
- [NAACL'25] Code and documentation for our VANE-Bench paper ☆17 · Aug 19, 2025 · Updated 5 months ago
- ☆70 · Mar 10, 2025 · Updated 11 months ago
- Model calibration in CLIP adapters ☆19 · Aug 19, 2024 · Updated last year
- [CVPR'24] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆80 · Jun 7, 2025 · Updated 8 months ago
- [NeurIPS 2024] WATT: Weight Average Test-Time Adaptation of CLIP ☆56 · Sep 26, 2024 · Updated last year
- ☆22 · Apr 27, 2024 · Updated last year
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆90 · Jul 4, 2024 · Updated last year
- Evaluation code for the INQUIRE benchmark ☆64 · Dec 18, 2024 · Updated last year
- Code for the paper "No Zero-Shot Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance" [NeurI… ☆94 · Apr 29, 2024 · Updated last year
- Code for the paper "Part-Based Models Improve Adversarial Robustness" (ICLR 2023) ☆23 · Sep 16, 2023 · Updated 2 years ago
- Plotting heatmaps from the self-attention of the [CLS] token in the last layer ☆50 · May 11, 2022 · Updated 3 years ago
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆96 · Oct 19, 2024 · Updated last year
- Official code for "Can We Talk Models Into Seeing the World Differently?" (ICLR 2025) ☆27 · Jan 26, 2025 · Updated last year
- SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models ☆21 · Jan 11, 2024 · Updated 2 years ago
- Meta-Prompting for Automating Zero-shot Visual Recognition with LLMs (ECCV 2024) ☆19 · Jul 15, 2024 · Updated last year
- Official repository for the ICCV 2023 paper "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts…" ☆61 · Jul 8, 2023 · Updated 2 years ago
- [AAAI 2023] Zero-Shot Enhancement of CLIP with Parameter-free Attention ☆93 · Apr 29, 2023 · Updated 2 years ago
- [NeurIPS '24] Frustratingly easy test-time adaptation of VLMs! ☆60 · Mar 24, 2025 · Updated 10 months ago
- Code for "Negative Yields Positive: Unified Dual-Path Adapter for Vision-Language Models" ☆27 · Oct 29, 2024 · Updated last year
- Source code for the paper "Fine-Grained Visual Classification via Internal Ensemble Learning Transformer" ☆55 · Mar 28, 2024 · Updated last year
- [NeurIPS 2024] What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights ☆28 · Oct 28, 2024 · Updated last year
- Papers about explainable AI (deep learning-based) ☆28 · Nov 14, 2025 · Updated 2 months ago
- ☆61 · May 2, 2025 · Updated 9 months ago
- [ICLR 2024 Spotlight] "Negative Label Guided OOD Detection with Pretrained Vision-Language Models" ☆30 · Oct 23, 2024 · Updated last year
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆25 · Nov 23, 2024 · Updated last year
- A real pal when you want to add VGG16 to your Keras model ☆27 · May 17, 2016 · Updated 9 years ago
- Code for the ACM MM 2024 paper "White-box Multimodal Jailbreaks Against Large Vision-Language Models" ☆31 · Dec 30, 2024 · Updated last year