XMUDeepLIT / LLaVE
LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning
☆75 · Updated 8 months ago
Alternatives and similar repositories for LLaVE
Users interested in LLaVE are comparing it to the repositories listed below.
- 【NeurIPS 2024】Dense Connector for MLLMs ☆180 · Updated last year
- ☆37 · Updated 3 weeks ago
- The official implementation of RAR ☆92 · Updated last month
- [ICLR2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆93 · Updated 2 months ago
- ☆82 · Updated last year
- [CVPR 2025] LamRA: Large Multimodal Model as Your Advanced Retrieval Assistant ☆174 · Updated 6 months ago
- ☆25 · Updated last year
- Official repository of MMDU dataset ☆103 · Updated last year
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆58 · Updated last year
- A collection of visual instruction tuning datasets. ☆76 · Updated last year
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆171 · Updated last year
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆136 · Updated 8 months ago
- ☆133 · Updated 2 years ago
- [NeurIPS 2024] Visual Perception by Large Language Model’s Weights ☆55 · Updated 10 months ago
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆43 · Updated 10 months ago
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆98 · Updated last year
- ☆66 · Updated last year
- [ICLR2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆246 · Updated last year
- Evaluation code for Ref-L4, a new REC benchmark in the LMM era ☆55 · Updated last year
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆77 · Updated last year
- A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo ☆34 · Updated last year
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆62 · Updated last year
- ✨✨ [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? ☆152 · Updated 3 months ago
- ☆92 · Updated 2 years ago
- ☆124 · Updated last year
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins… ☆19 · Updated 2 years ago
- ☆80 · Updated last year
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆163 · Updated last year
- FreeVA: Offline MLLM as Training-Free Video Assistant ☆68 · Updated last year
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆127 · Updated 10 months ago