DCDmllm / Cheetah
☆344 · Updated last year
Alternatives and similar repositories for Cheetah
Users interested in Cheetah are comparing it to the repositories listed below.
- ☆384 · Updated 5 months ago
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? ☆163 · Updated last month
- An open-source implementation for training LLaVA-NeXT. ☆397 · Updated 7 months ago
- (AAAI 2024) BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions ☆259 · Updated last year
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆308 · Updated 3 months ago
- Eagle Family: Exploring Model Designs, Data Recipes and Training Strategies for Frontier-Class Multimodal LLMs ☆788 · Updated last month
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆302 · Updated 4 months ago
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆565 · Updated last year
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 ☆249 · Updated last week
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆486 · Updated 9 months ago
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ☆142 · Updated 6 months ago
- WorldGPT: Empowering LLM as Multimodal World Model ☆116 · Updated 10 months ago
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆280 · Updated last year
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆456 · Updated last year
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-language Models ☆93 · Updated last year
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆321 · Updated 10 months ago
- 🚀 [NeurIPS 2024] Make Vision Matter in Visual Question Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark ☆84 · Updated 2 months ago
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator ☆110 · Updated 2 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆443 · Updated 6 months ago
- [ECCV 2024] Bridging Different Language Models and Generative Vision Models for Text-to-Image Generation ☆293 · Updated 10 months ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆342 · Updated 4 months ago
- Research Trends in LLM-guided Multimodal Learning. ☆358 · Updated last year
- Long Context Transfer from Language to Vision ☆378 · Updated 2 months ago
- [TMLR 2023] Official implementation of UnIVAL: Unified Model for Image, Video, Audio and Language Tasks. ☆228 · Updated last year
- MMICL: a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆350 · Updated last year
- HPT - Open Multimodal LLMs from HyperGAI ☆316 · Updated last year
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆267 · Updated 11 months ago
- Aligning LMMs with Factually Augmented RLHF ☆365 · Updated last year
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆149 · Updated last year
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆379 · Updated last month