deepglint / UniME
[ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs"
☆92 · Updated last month
Alternatives and similar repositories for UniME
Users interested in UniME are comparing it to the repositories listed below
- The Next Step Forward in Multimodal LLM Alignment ☆181 · Updated 5 months ago
- Official repository of the MMDU dataset ☆95 · Updated last year
- ☆72 · Updated 4 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆133 · Updated 7 months ago
- LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning ☆66 · Updated 4 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆135 · Updated 5 months ago
- ☆119 · Updated last year
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆200 · Updated 6 months ago
- 【NeurIPS 2024】Dense Connector for MLLMs ☆177 · Updated 11 months ago
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ☆192 · Updated last week
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 · Updated last year
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 4 months ago
- ☆90 · Updated last year
- Official code for the NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆143 · Updated 2 months ago
- Official PyTorch Implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆84 · Updated 10 months ago
- [NeurIPS 2025 Spotlight] Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆66 · Updated 2 weeks ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆115 · Updated 10 months ago
- A collection of awesome works on reasoning models like O1/R1 in the visual domain ☆41 · Updated 2 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆295 · Updated last year
- [ICCV 2025] Official implementation of LLaVA-KD: A Framework of Distilling Multimodal Large Language Models ☆98 · Updated 3 months ago
- Visual Instruction Tuning for the Qwen2 Base Model ☆38 · Updated last year
- SFT+RL boosts multimodal reasoning ☆34 · Updated 3 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆116 · Updated last year
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆114 · Updated last year
- Official implementation of MIA-DPO ☆66 · Updated 8 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆73 · Updated last year
- A collection of visual instruction tuning datasets. ☆76 · Updated last year
- The official implementation of RAR ☆92 · Updated last year
- Pixel-Level Reasoning Model trained with RL [NeurIPS 2025] ☆226 · Updated 3 weeks ago
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆223 · Updated 3 months ago