FreedomIntelligence / ALLaVA
Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model
Related projects
Alternatives and complementary repositories for ALLaVA
- SVIT: Scaling up Visual Instruction Tuning
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" (TMLR 2024)
- A collection of visual instruction tuning datasets.
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback
- [NeurIPS'24 Spotlight] EVE: Encoder-Free Vision-Language Models
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
- [CVPR 2024] A benchmark for evaluating multimodal LLMs using multiple-choice questions.
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024)
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of…
- Official repository of "MMBench: Is Your Multi-modal Model an All-around Player?"
- OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text
- An RLHF Infrastructure for Vision-Language Models
- Official repository of the MMDU dataset
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI
- Official implementation of the Law of Vision Representation in MLLMs
- [NeurIPS 2024] Dense Connector for MLLMs
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?"
- Implementation of PaLI-3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger"
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context
- LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents
- [ECCV 2024 Oral] Code for the paper "An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models"
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models"