bruceyo / V-PETL
Towards a Unified View on Visual Parameter-Efficient Transfer Learning
☆26 · Updated 2 years ago
Alternatives and similar repositories for V-PETL
Users interested in V-PETL are comparing it to the repositories listed below.
- ☆59 · Updated 5 months ago
- Turning to Video for Transcript Sorting ☆48 · Updated 2 years ago
- Compress conventional Vision-Language Pre-training data ☆52 · Updated 2 years ago
- 📍 Official PyTorch implementation of the paper "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS) ☆53 · Updated last year
- [ICLR 23] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- [CVPR2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory" ☆67 · Updated 11 months ago
- Repository for the paper: Teaching Structured Vision & Language Concepts to Vision & Language Models ☆47 · Updated 2 years ago
- ☆25 · Updated 2 years ago
- ☆62 · Updated 2 years ago
- Distribution-Aware Prompt Tuning for Vision-Language Models (ICCV 2023) ☆43 · Updated last year
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning ☆167 · Updated 2 years ago
- ☆27 · Updated last year
- Code for the paper: "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] ☆104 · Updated 2 years ago
- [CVPR2022] PyTorch re-implementation of Prompt Distribution Learning ☆18 · Updated 2 years ago
- Task Residual for Tuning Vision-Language Models (CVPR 2023) ☆73 · Updated 2 years ago
- Official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models) ☆116 · Updated 3 years ago
- Official code for "Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models" (TCSVT'2023) ☆28 · Updated last year
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆51 · Updated 5 months ago
- Seeing What You Miss: Vision-Language Pre-training with Semantic Completion Learning ☆20 · Updated last year
- (ICML 2024) Improve Context Understanding in Multimodal Large Language Models via Multimodal Composition Learning ☆27 · Updated last year
- Code for "Multitask Vision-Language Prompt Tuning" (https://arxiv.org/abs/2211.11720) ☆57 · Updated last year
- ☆101 · Updated last year
- [ICCV 2023] ALIP: Adaptive Language-Image Pre-training with Synthetic Caption ☆99 · Updated 2 years ago
- [ICCV 2023 oral] Official repository for the paper "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning" ☆73 · Updated 2 years ago
- ☆59 · Updated 3 years ago
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆22 · Updated last year
- Code and results accompanying the paper "CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets" ☆58 · Updated 2 years ago
- Distilling Large Vision-Language Model with Out-of-Distribution Generalizability (ICCV 2023) ☆59 · Updated last year
- Visual self-questioning for large vision-language assistants ☆43 · Updated 2 months ago
- Code for "Label Propagation for Zero-shot Classification with Vision-Language Models" (CVPR2024) ☆42 · Updated last year