linzhiqiu / cross_modal_adaptation
Cross-modal few-shot adaptation with CLIP
☆322 · Updated this week
Alternatives and similar repositories for cross_modal_adaptation:
Users interested in cross_modal_adaptation are comparing it to the repositories listed below
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners ☆362 · Updated last year
- ☆500 · Updated 2 years ago
- Official implementation of "Towards Efficient Visual Adaption via Structural Re-parameterization". ☆179 · Updated 10 months ago
- [AAAI 2023] Zero-Shot Enhancement of CLIP with Parameter-free Attention ☆85 · Updated last year
- ☆581 · Updated last year
- GMoE could be the next backbone model for many kinds of generalization tasks. ☆265 · Updated last year
- [CVPR 2024] Official PyTorch Code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models" ☆267 · Updated last month
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of paper titled "Self-regulating Prompts: Foundational Model Adaptation without F… ☆250 · Updated last year
- [Survey] Masked Modeling for Self-supervised Representation Learning on Vision and Beyond (https://arxiv.org/abs/2401.00897) ☆317 · Updated 4 months ago
- [ICCV 2023] Code for "Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement" ☆143 · Updated 10 months ago
- [CVPR 2023] Official repository of paper titled "MaPLe: Multi-modal Prompt Learning". ☆706 · Updated last year
- Code for AAAI 2024 paper: Relax Image-Specific Prompt Requirement in SAM: A Single Generic Prompt for Segmenting Camouflaged Objects ☆137 · Updated 4 months ago
- Official implementation of "Open-Vocabulary Multi-Label Classification via Multi-Modal Knowledge Transfer". ☆124 · Updated 3 months ago
- Learning Semantic Relationship among Instances for Image-Text Matching, CVPR 2023 ☆86 · Updated last year
- [AAAI'2023 & IJCV] Transferring Vision-Language Models for Visual Recognition: A Classifier Perspective ☆191 · Updated 8 months ago
- Official code for CVPR 2024 paper "Active Generalized Category Discovery" ☆40 · Updated 4 months ago
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆78 · Updated last year
- Exploring Visual Prompts for Adapting Large-Scale Models ☆273 · Updated 2 years ago
- [CVPR 2023] CLIP is Also an Efficient Segmenter: A Text-Driven Approach for Weakly Supervised Semantic Segmentation ☆181 · Updated 5 months ago
- [NeurIPS 2022] Implementation of "AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition" ☆337 · Updated 2 years ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆397 · Updated 4 months ago
- [CVPR 2024] Official implementations of CLIP-KD: An Empirical Study of CLIP Model Distillation ☆96 · Updated 7 months ago
- [ICLR 2024 Spotlight] Code Release of CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction ☆182 · Updated last year
- [CVPR'2023 Highlight & TPAMI] Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval? ☆231 · Updated 2 months ago
- CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆388 · Updated last week
- Linguistic-Aware Patch Slimming Framework for Fine-grained Cross-Modal Alignment, CVPR 2024 ☆80 · Updated 8 months ago
- [Survey] Awesome List of Mixup Augmentation and Beyond (https://arxiv.org/abs/2409.05202) ☆140 · Updated 4 months ago
- A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP. ☆424 · Updated last week
- Multimodal Prompting with Missing Modalities for Visual Recognition, CVPR'23 ☆191 · Updated last year
- ☆176 · Updated 2 years ago