XMUDeepLIT / LLaVE
LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning
☆73 · Updated 6 months ago
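The repository's title names its core training objective, hardness-weighted contrastive learning. For context, below is a minimal PyTorch sketch of one way such a loss can be implemented; the function name, the `beta` hardness scale, and the softmax-over-negatives weighting are illustrative assumptions, not LLaVE's exact formulation (see the paper for the real objective).

```python
import torch
import torch.nn.functional as F

def hardness_weighted_info_nce(q, k, temperature=0.07, beta=1.0):
    """Illustrative hardness-weighted InfoNCE loss (a sketch, not LLaVE's
    exact objective). q, k: L2-normalized embeddings of shape (N, D);
    (q[i], k[i]) are positive pairs, all other pairs are negatives."""
    logits = q @ k.t() / temperature                   # (N, N) similarities
    n = logits.size(0)
    pos_mask = torch.eye(n, dtype=torch.bool, device=logits.device)

    # Hardness weights: softmax over negative similarities (scaled by an
    # assumed hyperparameter beta), renormalized so the mean weight is 1.
    # Harder (more similar) negatives therefore contribute more to the loss.
    neg_logits = logits.masked_fill(pos_mask, float("-inf"))
    weights = torch.softmax(beta * neg_logits, dim=1) * (n - 1)

    # Fold the weights into the log-sum-exp: adding log(w) to a negative's
    # logit multiplies its exp() term by w inside the softmax denominator.
    weighted = torch.where(pos_mask, logits,
                           logits + torch.log(weights + 1e-12))
    return F.cross_entropy(weighted, torch.arange(n, device=logits.device))

# Example usage with random embeddings:
# q = F.normalize(torch.randn(8, 64), dim=1)
# k = F.normalize(torch.randn(8, 64), dim=1)
# loss = hardness_weighted_info_nce(q, k)
```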
Alternatives and similar repositories for LLaVE
Users interested in LLaVE are comparing it to the repositories listed below.
- Official repository of MMDU dataset ☆98 · Updated last year
- 【NeurIPS 2024】Dense Connector for MLLMs ☆180 · Updated last year
- A collection of visual instruction tuning datasets. ☆76 · Updated last year
- ☆81 · Updated last year
- ☆25 · Updated last year
- The official implementation of RAR ☆92 · Updated last year
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆58 · Updated last year
- [ICLR2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆91 · Updated this week
- [CVPR 2025] LamRA: Large Multimodal Model as Your Advanced Retrieval Assistant ☆172 · Updated 4 months ago
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆136 · Updated 6 months ago
- Official implementation of MIA-DPO ☆67 · Updated 10 months ago
- [NeurIPS 2024] Visual Perception by Large Language Model’s Weights ☆55 · Updated 8 months ago
- [ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆96 · Updated 3 weeks ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆77 · Updated last year
- ☆123 · Updated last year
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆277 · Updated last year
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆60 · Updated last year
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆35 · Updated 8 months ago
- ☆30 · Updated last month
- ☆91 · Updated 2 years ago
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆147 · Updated last year
- Visual Instruction Tuning for Qwen2 Base Model ☆40 · Updated last year
- [ICML 2024] Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning ☆50 · Updated last year
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆168 · Updated last year
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆156 · Updated 2 months ago
- ☆66 · Updated last year
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆84 · Updated 10 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆151 · Updated last year
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins… ☆19 · Updated 2 years ago
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆95 · Updated 10 months ago