[ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs"
☆103 · Updated Dec 8, 2025
Alternatives and similar repositories for UniME
Users interested in UniME are comparing it to the repositories listed below.
- [ACM MM25] Official PyTorch implementation of "Decoupled Global-Local Alignment for Improving Compositional Understanding" ☆15 · Updated Jul 15, 2025
- [ACM MM2025] The official repository for the RealSyn dataset ☆40 · Updated Dec 14, 2025
- Video Benchmark Suite: Rapid Evaluation of Video Foundation Models ☆16 · Updated Jan 10, 2025
- ViCToR: Improving Visual Comprehension via Token Reconstruction for Pretraining LMMs ☆29 · Updated Aug 15, 2025
- [AAAI 2026 Oral] The official code of "UniME-V2: MLLM-as-a-Judge for Universal Multimodal Embedding Learning" ☆68 · Updated Dec 8, 2025
- The official repo for the DanQing dataset. ☆32 · Updated Jan 16, 2026
- V-SWIFT: Training a Small VideoMAE Model on a Single Machine in a Day ☆29 · Updated Feb 5, 2025
- MLCD-Seg is a zero-shot segmentation model from DeepGlint. ☆17 · Updated Jul 4, 2025
- LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning ☆77 · Updated May 23, 2025
- ☆24 · Updated Oct 16, 2025
- Fully Open Framework for Democratized Multimodal Reinforcement Learning. ☆43 · Updated Dec 19, 2025
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆275 · Updated Dec 10, 2025
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] ☆597 · Updated this week
- ☆32 · Updated Feb 25, 2026
- ABC: Achieving Better Control of Multimodal Embeddings using VLMs [TMLR 2025] ☆21 · Updated Aug 21, 2025
- Code for "CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning" ☆33 · Updated Mar 26, 2025
- ☆39 · Updated Jan 12, 2026
- ☆13 · Updated Feb 2, 2025
- ☆14 · Updated Apr 25, 2025
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆153 · Updated Dec 14, 2025
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆178 · Updated Oct 1, 2024
- Collection of model-centric MCP servers ☆26 · Updated May 21, 2025
- ☆58 · Updated Feb 27, 2025
- [ACL 2025 Oral] 🔥🔥 MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval ☆245 · Updated Nov 6, 2025
- [CVPR 2025] LamRA: Large Multimodal Model as Your Advanced Retrieval Assistant ☆179 · Updated Jul 7, 2025
- The code implementation for UME-R1: Exploring Reasoning-Driven Generative Multimodal Embeddings (ICLR 2026). ☆48 · Updated Feb 25, 2026
- [EMNLP 2024] Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality ☆21 · Updated Oct 8, 2024
- Advances in recent large vision language models (LVLMs) ☆15 · Updated Sep 23, 2024
- Ensemble Learning of Foundation Models ☆17 · Updated Aug 29, 2025
- Official implementation of "Meta-Entity Driven Triplet Mining for Aligning Medical Vision-Language Models" ☆14 · Updated Mar 19, 2025
- Official This-Is-My Dataset published in CVPR 2023 ☆16 · Updated Jul 18, 2024
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆82 · Updated Nov 23, 2025
- Large-Scale Visual Representation Model ☆704 · Updated Dec 8, 2025
- [MICCAI'25 Early Accept] MAKE: Multi-Aspect Knowledge-Enhanced Vision-Language Pretraining for Zero-shot Dermatological Assessment ☆18 · Updated Feb 27, 2026
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ☆50 · Updated Jan 14, 2025
- [ICCV2025] Constructing Ophthalmic MLLM for Positioning-diagnosis Collaboration Through Clinical Cognitive Chain Reasoning ☆23 · Updated Nov 13, 2025
- [NeurIPS24] VisMin: Visual Minimal-Change Understanding ☆19 · Updated Mar 3, 2025
- The official code for the paper "UniFashion: A Unified Vision-Language Model for Multimodal Fashion Retrieval and Generation" ☆36 · Updated Jul 29, 2024
- ☆290 · Updated Jul 29, 2025