[CVPR 2025] COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training
☆38, updated Mar 27, 2025
Alternatives and similar repositories for COSMOS
Users interested in COSMOS often compare it to the repositories listed below.
- [CVPR 2025] FLAIR: VLM with Fine-grained Language-informed Image Representations (☆130, updated Sep 1, 2025)
- Code for "CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning" (☆32, updated Mar 26, 2025)
- ViCToR: Improving Visual Comprehension via Token Reconstruction for Pretraining LMMs (☆28, updated Aug 15, 2025)
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding (☆50, updated Jan 14, 2025)
- [CVPR 2024] X-MIC: Cross-Modal Instance Conditioning for Egocentric Action Generalization (☆11, updated Nov 7, 2024)
- [ICML 2024] Official implementation of "ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections" (☆16, updated May 31, 2024)
- [GCPR 2023] DeViL: Decoding Vision features into Language (☆12, updated Oct 16, 2023)
- FuseLIP: Multimodal Embeddings via Early Fusion of Discrete Tokens (☆17, updated Sep 8, 2025)
- [CVPR 2023] Q: How to Specialize Large Vision-Language Models to Data-Scarce VQA Tasks? A: Self-Train on Unlabeled Images! (☆17, updated May 14, 2024)
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" (☆45, updated Apr 22, 2025)
- [CVPR 2025 Highlight] Official PyTorch codebase for the paper "Assessing and Learning Alignment of Unimodal Vision and Language Models" (☆57, updated Aug 15, 2025)
- Fully open framework for democratized multimodal reinforcement learning (☆41, updated Dec 19, 2025)
- [TMLR 2025] ABC: Achieving Better Control of Multimodal Embeddings using VLMs (☆20, updated Aug 21, 2025)
- [CVPR 2024 CVinW] Multi-Agent VQA: Exploring Multi-Agent Foundation Models on Zero-Shot Visual Question Answering (☆20, updated Sep 21, 2024)
- [ECCV 2024] Official release of SILC: Improving Vision Language Pretraining with Self-Distillation (☆47, updated Oct 3, 2024)
- [CVPR 2024] Official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" (☆52, updated Jun 16, 2025)
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions (☆138, updated May 8, 2025)
- A universal foundation model for grounded biomedical image interpretation (☆36, updated Feb 1, 2026)
- [ICLR 2025] Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion (☆61, updated Nov 30, 2025)
- [NeurIPS 2024] What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights (☆28, updated Oct 28, 2024)
- [CVPR 2025] PyTorch implementation of the paper "FLAME: Frozen Large Language Models Enable Data-Efficient Language-Image Pre-training" (☆32, updated Jul 8, 2025)
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! (☆25, updated Nov 23, 2024)
- LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning (☆75, updated May 23, 2025)
- Counterfactual Reasoning VQA Dataset (☆27, updated Nov 23, 2023)
- [ECCV 2024] Code and data for the paper "Emergent Visual-Semantic Hierarchies in Image-Text Representations" (☆34, updated Aug 12, 2024)
- [CVPR 2024] Guided Slot Attention for Unsupervised Video Object Segmentation (☆64, updated Dec 23, 2024)
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning (☆45, updated Jul 2, 2025)
- [CVPR 2023] Generative Bias for Robust Visual Question Answering (☆28, updated Jul 4, 2023)
- [ICLR 2025] Official code repository for "TULIP: Token-length Upgraded CLIP" (☆33, updated Jan 26, 2026)
- Using Graph Neural Networks to Segment MRIs of Brain Tumors (☆29, updated Aug 13, 2022)
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images (☆31, updated May 16, 2024)
- [CVPR 2023 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? (☆35, updated Apr 27, 2023)
- Official implementation of "Referring Video Object Segmentation via Language Aligned Track Selection" (☆40, updated Jun 2, 2025)
- [CVPR 2025] Official implementation of the paper "Multi-Layer Visual Feature Fusion in Multimodal LLMs: Methods, Analysis, and Best Practi…" (☆44, updated Oct 29, 2025)
- Awesome Vision-Language Pretraining Papers (☆40, updated Jan 15, 2025)
- [NeurIPS 2024] Hybrid Mamba for Few-Shot Segmentation (☆42, updated Oct 1, 2024)
- [CVPR 2024 Highlight] Can I Trust Your Answer? Visually Grounded Video Question Answering (☆83, updated Jul 1, 2024)