yshinya6 / clip-refine
Code repository for "Post-pre-training for Modality Alignment in Vision-Language Foundation Models" (CVPR 2025)
☆24 · Updated 2 weeks ago
Alternatives and similar repositories for clip-refine
Users interested in clip-refine are comparing it to the repositories listed below.
- FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding (NeurIPS 2024) ☆24 · Updated 7 months ago
- [AAAI'25, CVPRW 2024] Official repository of the paper "Learning to Prompt with Text Only Supervision for Vision-Language Models". ☆111 · Updated 7 months ago
- [ICML 2024] Official PyTorch implementation of CoMC: Language-Driven Cross-Modal Classifier for Zero-Shot Multi-Label Image Recognition ☆14 · Updated last year
- AlignCLIP: Improving Cross-Modal Alignment in CLIP (ICLR 2025) ☆43 · Updated 5 months ago
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆54 · Updated last year
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆74 · Updated 6 months ago
- Source code of our AAAI 2024 paper "Cross-Modal and Uni-Modal Soft-Label Alignment for Image-Text Retrieval" ☆46 · Updated last year
- [CVPR 2025] COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training ☆28 · Updated 4 months ago
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization ☆106 · Updated last year
- [CVPR 2025] Hybrid Global-Local Representation with Augmented Spatial Guidance for Zero-Shot Referring Image Segmentation ☆19 · Updated last month
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆169 · Updated last year
- [BMVC 2023] Zero-shot Composed Text-Image Retrieval ☆53 · Updated 8 months ago
- Adaptation of vision-language models (CLIP) to downstream tasks using local and global prompts. ☆47 · Updated 3 weeks ago
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of the paper "Self-regulating Prompts: Foundational Model Adaptation without F… ☆271 · Updated last year
- [AAAI 2024] Official implementation of TGP-T ☆28 · Updated last year
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models". ☆75 · Updated 3 months ago
- [NeurIPS 2024 Spotlight] Official implementation for "PACE: marrying generalization in PArameter-efficient fine-tuning with Consistency r… ☆16 · Updated 3 months ago
- [CVPR 2024] Zero-shot method for Vision-Language Models based on a robust formulation of the MeanShift algorithm for Test-time Augmentati… ☆59 · Updated 7 months ago
- ☆24 · Updated last year
- [CVPR 2025 Highlight] Official PyTorch codebase for the paper "Assessing and Learning Alignment of Unimodal Vision and Language Models" ☆47 · Updated last month
- [NeurIPS 2023] The official implementation of SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation ☆32 · Updated last year
- An easy way to apply LoRA to CLIP. Implementation of the paper "Low-Rank Few-Shot Adaptation of Vision-Language Models" (CLIP-LoRA) [CVPR… ☆234 · Updated 2 months ago
- Code and dataset for the paper "LAMM: Label Alignment for Multi-Modal Prompt Learning" (AAAI 2024) ☆33 · Updated last year
- The official repo for "Ref-AVS: Refer and Segment Objects in Audio-Visual Scenes", ECCV 2024 ☆45 · Updated 8 months ago
- ☆49 · Updated 2 months ago
- PyTorch implementation of "Test-time Adaptation against Multi-modal Reliability Bias". ☆37 · Updated 7 months ago
- Code for the paper "Visual Explanations of Image–Text Representations via Multi-Modal Information Bottleneck Attribution" ☆54 · Updated last year
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆100 · Updated last year
- The repo for "MMPareto: Boosting Multimodal Learning with Innocent Unimodal Assistance", ICML 2024 ☆44 · Updated last year
- The official implementation of "Cross-modal Causal Relation Alignment for Video Question Grounding" (CVPR 2025 Highlight) ☆29 · Updated 3 months ago