deepglint / UniME
[ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs"
☆83 · Updated last month
Alternatives and similar repositories for UniME
Users interested in UniME are comparing it to the repositories listed below.
- The Next Step Forward in Multimodal LLM Alignment ☆170 · Updated 3 months ago
- ☆86 · Updated last year
- Official repository of the MMDU dataset ☆92 · Updated 10 months ago
- Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ☆56 · Updated 2 months ago
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆129 · Updated 5 months ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆114 · Updated this week
- ☆67 · Updated 2 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆162 · Updated 4 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆165 · Updated 10 months ago
- ☆25 · Updated last week
- [NeurIPS'24] Official PyTorch implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆57 · Updated 10 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆129 · Updated 3 months ago
- Official PyTorch implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆79 · Updated 8 months ago
- ☆118 · Updated last year
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆89 · Updated 2 months ago
- A collection of visual instruction tuning datasets ☆76 · Updated last year
- LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning ☆63 · Updated 2 months ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆186 · Updated 4 months ago
- [ICCV'25] Explore the Limits of Omni-modal Pretraining at Scale ☆111 · Updated 11 months ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆41 · Updated 3 months ago
- Pixel-Level Reasoning Model trained with RL ☆180 · Updated last month
- [NeurIPS 2024] Dense Connector for MLLMs ☆171 · Updated 9 months ago
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770) ☆156 · Updated 10 months ago
- Official implementation of MIA-DPO ☆62 · Updated 6 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆287 · Updated 10 months ago
- Official implementation of the paper AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding ☆77 · Updated 3 months ago
- The official implementation of RAR ☆90 · Updated last year
- The official repo for "TextCoT: Zoom In for Enhanced Multimodal Text-Rich Image Understanding" ☆41 · Updated 10 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆84 · Updated last month
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆105 · Updated 2 months ago