OFA-Sys / TouchStone
TouchStone: Evaluating Vision-Language Models by Language Models
☆81 · Updated last year

Alternatives and similar repositories for TouchStone:
Users interested in TouchStone are comparing it to the libraries listed below.
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… · ☆109 · Updated 2 months ago
- Official repository of the MMDU dataset · ☆82 · Updated 4 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria · ☆61 · Updated 3 months ago
- Official repo for StableLLAVA · ☆94 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" · ☆57 · Updated last year
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models · ☆43 · Updated 7 months ago
- InstructionGPT-4 · ☆38 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs · ☆40 · Updated 7 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* · ☆80 · Updated 2 weeks ago
- Official GitHub repo of G-LLaVA · ☆122 · Updated 8 months ago
- LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation · ☆126 · Updated last year
- SVIT: Scaling up Visual Instruction Tuning · ☆164 · Updated 7 months ago
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" · ☆128 · Updated 2 months ago
- Harnessing 1.4M GPT4V-synthesized Data for a Lite Vision-Language Model · ☆252 · Updated 7 months ago
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding · ☆27 · Updated last month
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation · ☆84 · Updated 4 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) · ☆25 · Updated 7 months ago
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models · ☆76 · Updated 7 months ago
- A collection of visual instruction tuning datasets · ☆76 · Updated 10 months ago
- Official code for the paper "EasyGen: Easing Multimodal Generation with a Bidirectional Conditional Diffusion Model and LLMs" · ☆73 · Updated 2 months ago
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI · ☆98 · Updated 6 months ago
- 🦩 Visual Instruction Tuning with Polite Flamingo: training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) · ☆63 · Updated last year
- The codebase for the EMNLP 2024 paper "Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo…" · ☆68 · Updated this week