BAAI-DCAI / Visual-Instruction-Tuning
SVIT: Scaling up Visual Instruction Tuning
☆166 · Updated last year
Alternatives and similar repositories for Visual-Instruction-Tuning
Users interested in Visual-Instruction-Tuning are comparing it to the repositories listed below.
- ☆133 · Updated 2 years ago
- A collection of visual instruction tuning datasets. ☆76 · Updated last year
- ☆92 · Updated 2 years ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆280 · Updated last year
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆294 · Updated last year
- ☆155 · Updated last year
- Official implementation for the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" ☆259 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?". ☆58 · Updated 2 years ago
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆213 · Updated last year
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆358 · Updated last year
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆71 · Updated 11 months ago
- ☆87 · Updated last year
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated 2 years ago
- ☆101 · Updated 2 years ago
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆97 · Updated last year
- [NeurIPS 2024] Dense Connector for MLLMs ☆180 · Updated last year
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆317 · Updated last year
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆158 · Updated last year
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆116 · Updated last year
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆320 · Updated last year
- ☆81 · Updated last year
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins… ☆19 · Updated 2 years ago
- ☆120 · Updated last year
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆86 · Updated last year
- [NeurIPS 2023] Text data, code and pre-trained models for paper "Improving CLIP Training with Language Rewrites" ☆287 · Updated 2 years ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 · Updated 2 years ago
- ☆66 · Updated last year
- Official repository of MMDU dataset ☆102 · Updated last year
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆153 · Updated 4 months ago
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆167 · Updated last year