deepglint / UniME
[ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs"
☆96 · Updated 2 weeks ago
Alternatives and similar repositories for UniME
Users interested in UniME are comparing it to the repositories listed below
- The Next Step Forward in Multimodal LLM Alignment ☆193 · Updated 7 months ago
- LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning ☆75 · Updated 7 months ago
- Official repository of the MMDU dataset ☆99 · Updated last year
- 【NeurIPS 2024】Dense Connector for MLLMs ☆181 · Updated last year
- ☆90 · Updated last year
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆150 · Updated 2 months ago
- ☆37 · Updated last year
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆58 · Updated last year
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ☆211 · Updated 3 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 7 months ago
- ☆74 · Updated 7 months ago
- SFT+RL boosts multimodal reasoning ☆41 · Updated 6 months ago
- Official PyTorch Implementation of MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced … ☆89 · Updated last year
- ☆124 · Updated last year
- ☆41 · Updated 4 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆299 · Updated last year
- A collection of visual instruction tuning datasets. ☆76 · Updated last year
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆121 · Updated last year
- Pixel-Level Reasoning Model trained with RL [NeurIPS 2025] ☆256 · Updated last month
- [TMLR 25] SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆145 · Updated 2 months ago
- This project aims to collect and collate various datasets for multimodal large model training, including but not limited to pre-training … ☆63 · Updated 7 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆135 · Updated 4 months ago
- Official code for the NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆165 · Updated 3 weeks ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆201 · Updated last year
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆168 · Updated last year
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆41 · Updated 8 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆119 · Updated last year
- Official implementation of MIA-DPO ☆69 · Updated 11 months ago
- [MM2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆59 · Updated last year
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆39 · Updated 8 months ago