miccunifi / Cross-the-Gap
[ICLR 2025] - Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion
☆47 · Updated 2 months ago
Alternatives and similar repositories for Cross-the-Gap
Users interested in Cross-the-Gap are comparing it to the repositories listed below.
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆49 · Updated 11 months ago
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ☆43 · Updated 5 months ago
- [ECCV 2024] Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation ☆60 · Updated 7 months ago
- Easy wrapper for inserting LoRA layers in CLIP. ☆33 · Updated last year
- [ECCV 2024] Official repository for "DataDream: Few-shot Guided Dataset Generation" ☆39 · Updated 11 months ago
- [CVPR 2025] COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training ☆21 · Updated 2 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆34 · Updated last year
- ☆21 · Updated last year
- Official PyTorch implementation of the CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models" ☆68 · Updated 2 months ago
- ☆42 · Updated last month
- Composed Video Retrieval ☆58 · Updated last year
- [CVPR'24] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆74 · Updated 2 weeks ago
- cliptrase ☆35 · Updated 9 months ago
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆73 · Updated 11 months ago
- [ICLR 2025] See What You Are Told: Visual Attention Sink in Large Multimodal Models ☆30 · Updated 4 months ago
- ☆42 · Updated 4 months ago
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ☆35 · Updated 11 months ago
- ☆25 · Updated last year
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆131 · Updated last month
- [CVPR 2025] FLAIR: VLM with Fine-grained Language-informed Image Representations ☆81 · Updated 2 months ago
- [CVPR 2023] Task Residual for Tuning Vision-Language Models ☆73 · Updated 2 years ago
- [NeurIPS 2024] FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding ☆23 · Updated 6 months ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆50 · Updated 2 months ago
- Augmenting with Language-guided Image Augmentation (ALIA) ☆77 · Updated last year
- [ICLR 2024, Spotlight] Sentence-level Prompts Benefit Composed Image Retrieval ☆83 · Updated last year
- [ICCV 2023] Official implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning" ☆53 · Updated last year
- Diffusion-TTA improves pre-trained discriminative models, such as image classifiers or segmentors, using pre-trained generative models. ☆74 · Updated last year
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images. ☆28 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆41 · Updated 6 months ago
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models ☆80 · Updated 10 months ago