DCDmllm / Cheetah
☆342 · Updated last year
Alternatives and similar repositories for Cheetah
Users interested in Cheetah are comparing it to the repositories listed below.
- ☆399 · Updated 9 months ago
- (AAAI 2024) BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions ☆258 · Updated last year
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? ☆169 · Updated 5 months ago
- An open-source implementation for training LLaVA-NeXT. ☆422 · Updated 11 months ago
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ☆151 · Updated 10 months ago
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆326 · Updated 3 months ago
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 ☆269 · Updated 4 months ago
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models ☆96 · Updated last year
- Official Repository of ChartX & ChartVLM: A Versatile Benchmark and Foundation Model for Complicated Chart Reasoning ☆241 · Updated last year
- Eagle: Frontier Vision-Language Models with Data-Centric Strategies ☆876 · Updated last month
- 🚀 [NeurIPS 2024] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark ☆87 · Updated 3 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆310 · Updated 8 months ago
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆524 · Updated last year
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆577 · Updated last year
- [NAACL 2025 Oral] 🎉 From redundancy to relevance: Enhancing explainability in multimodal large language models ☆119 · Updated 7 months ago
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator ☆114 · Updated 6 months ago
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆269 · Updated last year
- WorldGPT: Empowering LLM as Multimodal World Model ☆117 · Updated last year
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆227 · Updated 6 months ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆349 · Updated 8 months ago
- The repository for the paper titled "Leopard: A Vision Language Model For Text-Rich Multi-Image Tasks" ☆158 · Updated 9 months ago
- [ACL 2023 Findings] FACTUAL dataset, the textual scene graph parser trained on FACTUAL. ☆116 · Updated 3 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆211 · Updated 9 months ago
- HPT - Open Multimodal LLMs from HyperGAI ☆315 · Updated last year
- Evaluating Vision & Language Pretraining Models with Objects, Attributes and Relations. [EMNLP 2022] ☆135 · Updated last year
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆466 · Updated last year
- Liquid: Language Models are Scalable and Unified Multi-modal Generators ☆616 · Updated 5 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆331 · Updated last year
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆154 · Updated last year
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆497 · Updated last year