Code for "CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning"
☆32 · Updated Mar 26, 2025
Alternatives and similar repositories for CAFe
Users interested in CAFe are comparing it to the repositories listed below.
- [AAAI 2026 Oral] The official code of "UniME-V2: MLLM-as-a-Judge for Universal Multimodal Embedding Learning" ☆66 · Updated Dec 8, 2025
- The official repository of our paper "Reinforcing Video Reasoning with Focused Thinking" ☆35 · Updated Jun 12, 2025
- LLMBind: A Unified Modality-Task Integration Framework ☆19 · Updated Jun 16, 2024
- ☆38 · Updated Jan 12, 2026
- LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning ☆77 · Updated May 23, 2025
- ☆10 · Updated Jul 5, 2024
- ☆11 · Updated Apr 10, 2024
- [ICCV 2023] Going Beyond Nouns With Vision & Language Models Using Synthetic Data ☆14 · Updated Sep 30, 2023
- Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs? ☆16 · Updated Jun 3, 2025
- ☆11 · Updated Oct 2, 2024
- Project for the SNARE benchmark ☆11 · Updated Jun 5, 2024
- [CVPR 2025] COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training ☆38 · Updated Mar 27, 2025
- Towards a Unified View on Visual Parameter-Efficient Transfer Learning ☆26 · Updated Oct 13, 2022
- [CVPR 2025] LamRA: Large Multimodal Model as Your Advanced Retrieval Assistant ☆178 · Updated Jul 7, 2025
- Code and data for the paper "Emergent Visual-Semantic Hierarchies in Image-Text Representations" (ECCV 2024) ☆34 · Updated Aug 12, 2024
- The repository for "SELECT: A Large-Scale Benchmark of Data Curation Strategies for Image Recognition" ☆16 · Updated Oct 8, 2024
- PyTorch implementation of "Sample- and Parameter-Efficient Auto-Regressive Image Models" from CVPR 2025 ☆14 · Updated Nov 21, 2025
- The official repo for the DanQing dataset ☆30 · Updated Jan 16, 2026
- Code and datasets for "Text encoders are performance bottlenecks in contrastive vision-language models". Coming soon! ☆11 · Updated May 24, 2023
- ☆14 · Updated Dec 31, 2024
- GeckoNum Benchmark for T2I Model Eval. ☆15 · Updated Dec 5, 2024
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images ☆31 · Updated May 16, 2024
- Enhancing Multimodal Compositional Reasoning of Visual Language Models with Generative Negative Mining (WACV 2024) ☆14 · Updated Jan 3, 2024
- [EMNLP 2024] Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality ☆21 · Updated Oct 8, 2024
- ☆27 · Updated Feb 25, 2026
- ☆14 · Updated Jul 5, 2024
- ☆15 · Updated May 23, 2022
- Implements VAR+CLIP for text-to-image (T2I) generation ☆147 · Updated Jan 23, 2025
- Official code repo for the paper "Merging Models on the Fly Without Retraining: A Sequential Approach to Scalable Continual Model Merging" ☆23 · Updated Oct 11, 2025
- ☆17 · Updated Dec 13, 2023
- ☆20 · Updated Apr 23, 2024
- Does Understanding Inform Generation in Unified Multimodal Models? From Analysis to Path Forward ☆60 · Updated Nov 27, 2025
- The implementation of CounterCurate, the data curation pipeline for both physical and semantic counterfactual image-caption pairs ☆19 · Updated Jun 27, 2024
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆274 · Updated Dec 10, 2025
- [ICLR'25] PiCO: Peer Review in LLMs based on the Consistency Optimization (https://arxiv.org/pdf/2402.01830) ☆36 · Updated Feb 16, 2025
- SIEVE: Multimodal Dataset Pruning using Image-Captioning Models (CVPR 2024) ☆18 · Updated Apr 28, 2024
- If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions ☆17 · Updated Apr 4, 2024
- ☆16 · Updated Jan 3, 2023
- [ECCV 2024] FlexAttention for Efficient High-Resolution Vision-Language Models ☆46 · Updated Jan 8, 2025