abhinav-neil / clip-zs-prompting
Using CLIP for zero-shot learning and image classification with text & visual prompting.
☆15 · Updated 2 years ago
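The mechanism behind the zero-shot classification this repository's description points to is cosine similarity between one image embedding and several class-prompt embeddings, scaled by a temperature and softmaxed into class probabilities. A minimal pure-Python sketch with made-up 2-D "embeddings" (the vectors and the temperature value are illustrative, not taken from the repository):

```python
import math

def zero_shot_probs(image_emb, text_embs, temperature=100.0):
    """Softmax over scaled cosine similarities between one image
    embedding and N class-prompt embeddings."""
    def normalize(v):
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]

    img = normalize(image_emb)
    logits = [temperature * sum(a * b for a, b in zip(img, normalize(t)))
              for t in text_embs]
    peak = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - peak) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Dummy embeddings: one image, three class prompts; the first prompt
# is nearly parallel to the image, so it gets almost all the mass.
probs = zero_shot_probs([0.2, 0.9], [[0.1, 1.0], [1.0, 0.0], [-0.5, 0.5]])
```

In the real pipeline the vectors come from CLIP's image and text encoders, and the text side is built by filling a template like "a photo of a {label}" for each class name.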
Alternatives and similar repositories for clip-zs-prompting:
Users interested in clip-zs-prompting are comparing it to the libraries listed below.
- Code for studying OpenAI's CLIP explainability ☆31 · Updated 3 years ago
- Task Residual for Tuning Vision-Language Models (CVPR 2023) ☆72 · Updated last year
- Code and results accompanying our paper "CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets" ☆57 · Updated last year
- Code for "Multitask Vision-Language Prompt Tuning" (https://arxiv.org/abs/2211.11720) ☆56 · Updated 10 months ago
- Finetuning CLIP for Few-Shot Learning ☆41 · Updated 3 years ago
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization ☆105 · Updated last year
- [ICLR 2023] Official code repository for "Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning" ☆59 · Updated last year
- Code for the CVPR 2023 paper "SViTT: Temporal Learning of Sparse Video-Text Transformers" ☆18 · Updated last year
- Official repository for "Vita-CLIP: Video and Text Adaptive CLIP via Multimodal Prompting" [CVPR 2023] ☆116 · Updated last year
- Official implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning" (ICCV 2023) ☆53 · Updated last year
- [COLING'25] HGCLIP: Exploring Vision-Language Models with Graph Representations for Hierarchical Understanding ☆39 · Updated 4 months ago
- Implementation for "DualCoOp: Fast Adaptation to Multi-Label Recognition with Limited Annotations" (NeurIPS 2022) ☆59 · Updated last year
- Visual self-questioning for large vision-language assistants ☆41 · Updated 6 months ago
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆86 · Updated last year
- An efficient tuning method for VLMs ☆81 · Updated last year
- This repository houses the code for the paper "The Neglected Tails in Vision-Language Models" ☆28 · Updated 4 months ago
- Official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models) ☆114 · Updated 3 years ago
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆69 · Updated 2 months ago
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆160 · Updated last year
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning ☆163 · Updated last year
- [ICCV 2021] TupleInfoNCE ☆16 · Updated 2 years ago
- [ICCV 2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control" ☆53 · Updated last year
- [CVPR 2025] Few-shot Recognition via Stage-Wise Retrieval-Augmented Finetuning ☆15 · Updated 3 weeks ago
- [CVPR 2024 Highlight] OpenBias: Open-set Bias Detection in Text-to-Image Generative Models ☆23 · Updated 2 months ago
- ☆23 · Updated 2 years ago
- [NeurIPS 2023] Meta-Adapter ☆48 · Updated last year
- LiVT PyTorch implementation ☆70 · Updated 2 years ago
- [CVPR 2025 Highlight] Official PyTorch codebase for the paper "Assessing and Learning Alignment of Unimodal Vision and Language Models" ☆33 · Updated last week
- Easy wrapper for inserting LoRA layers in CLIP ☆31 · Updated 10 months ago
- ☆39 · Updated 3 months ago
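Several of the repositories above (UPL, DualCoOp, PLOT, Prompt-aligned Gradient) build on CoOp-style prompt learning: learnable context vectors are optimized with gradient descent while the CLIP encoders stay frozen. A minimal sketch of that training loop, with random tensors standing in for the frozen image feature and class-name embeddings; dimensions, learning rate, and step count are illustrative and not taken from any repository listed here:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
dim, n_ctx, n_cls = 32, 4, 3

ctx = torch.randn(n_cls, n_ctx, dim, requires_grad=True)  # learnable prompt context
cls_emb = torch.randn(n_cls, dim)                         # frozen class-name embeddings
img_feat = torch.randn(1, dim)                            # frozen image feature
label = torch.tensor([1])                                 # ground-truth class index

opt = torch.optim.SGD([ctx], lr=0.1)
losses = []
for _ in range(50):
    # Stand-in for the frozen text encoder: pool the context vectors
    # and add the class embedding to get one text feature per class.
    txt_feats = ctx.mean(dim=1) + cls_emb      # (n_cls, dim)
    logits = img_feat @ txt_feats.T            # (1, n_cls)
    loss = F.cross_entropy(logits, label)
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Only `ctx` receives gradients, which is the core design choice of prompt tuning: adapting a large frozen model by training a few thousand parameters instead of finetuning the encoders.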