X2FD / LVIS-INSTRUCT4V
☆133 · Updated last year
Alternatives and similar repositories for LVIS-INSTRUCT4V
Users interested in LVIS-INSTRUCT4V are comparing it to the libraries listed below.
- A collection of visual instruction tuning datasets. ☆76 · Updated last year
- Official repo for StableLLAVA ☆95 · Updated last year
- SVIT: Scaling up Visual Instruction Tuning ☆163 · Updated last year
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆211 · Updated last year
- ☆64 · Updated last year
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆262 · Updated 11 months ago
- ☆91 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated last year
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆94 · Updated 11 months ago
- ☆149 · Updated 7 months ago
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆148 · Updated 6 months ago
- Official repository of the MMDU dataset ☆92 · Updated 8 months ago
- [NeurIPS'24] Official PyTorch implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 8 months ago
- A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo ☆34 · Updated 10 months ago
- ☆115 · Updated 10 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆117 · Updated 2 months ago
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆69 · Updated 4 months ago
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆86 · Updated 9 months ago
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆142 · Updated 7 months ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- ☆75 · Updated 6 months ago
- [NeurIPS 2024] Dense Connector for MLLMs ☆168 · Updated 8 months ago
- ☆66 · Updated 10 months ago
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆91 · Updated 5 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆117 · Updated 6 months ago
- LVBench: An Extreme Long Video Understanding Benchmark ☆91 · Updated 9 months ago
- ☆99 · Updated last year
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?" ☆182 · Updated 8 months ago
- Official implementation of the Law of Vision Representation in MLLMs ☆155 · Updated 7 months ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆218 · Updated 2 months ago