LLaVA-Annonymous / LLaVA
☆28 · Updated last year
Alternatives and similar repositories for LLaVA:
Users interested in LLaVA are comparing it to the libraries listed below.
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆109 · Updated 8 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆137 · Updated last week
- A collection of visual instruction tuning datasets. ☆76 · Updated 10 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆78 · Updated 8 months ago
- Official repo for StableLLAVA ☆94 · Updated last year
- This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strat… ☆76 · Updated 9 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆25 · Updated 6 months ago
- ☆132 · Updated last year
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆91 · Updated 2 months ago
- VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models". ☆91 · Updated 6 months ago
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?". ☆56 · Updated last year
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 3 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆112 · Updated 6 months ago
- The official code of the paper "PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction". ☆51 · Updated last week
- SVIT: Scaling up Visual Instruction Tuning ☆164 · Updated 7 months ago
- Official implementation of the Law of Vision Representation in MLLMs ☆146 · Updated 2 months ago
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆137 · Updated 5 months ago
- ☆87 · Updated last year
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆96 · Updated 2 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆77 · Updated 9 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆60 · Updated 7 months ago
- A Survey on Benchmarks of Multimodal Large Language Models ☆79 · Updated 2 weeks ago
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆61 · Updated 3 months ago
- ☆61 · Updated 6 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆154 · Updated 3 months ago
- ☆24 · Updated 8 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆40 · Updated 6 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆161 · Updated 3 months ago
- [CVPR 2024] Official Code for the Paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models" ☆103 · Updated 7 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆252 · Updated 6 months ago