NVlabs / EAGLE
Eagle Family: Exploring Model Designs, Data Recipes and Training Strategies for Frontier-Class Multimodal LLMs
☆765 · Updated last week
Alternatives and similar repositories for EAGLE:
Users interested in EAGLE are comparing it to the repositories listed below.
- ☆382 · Updated 4 months ago
- An open-source implementation for training LLaVA-NeXT. ☆393 · Updated 6 months ago
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆563 · Updated 11 months ago
- Liquid: Language Models are Scalable and Unified Multi-modal Generators ☆555 · Updated last month
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆303 · Updated 2 months ago
- [ICLR 2025] Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation. ☆1,380 · Updated last week
- Accelerating the development of large multimodal models (LMMs) with one-click evaluation module - lmms-eval. ☆2,416 · Updated last week
- 🔥 Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos ☆1,070 · Updated last week
- [CVPR 2025] The First Investigation of CoT Reasoning in Image Generation ☆651 · Updated last month
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆376 · Updated 2 weeks ago
- Rethinking Step-by-step Visual Reasoning in LLMs ☆292 · Updated 3 months ago
- LLM2CLIP makes the SOTA pretrained CLIP model even more SOTA. ☆508 · Updated last month
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆542 · Updated this week
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆565 · Updated last year
- [NeurIPS 2024] An official implementation of ShareGPT4Video: Improving Video Understanding and Generation with Better Captions ☆1,055 · Updated 6 months ago
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM". ☆246 · Updated 4 months ago
- Explore the multimodal "Aha Moment" on a 2B model ☆583 · Updated last month
- ☆344 · Updated 11 months ago
- Official Implementation for "Lyra: An Efficient and Speech-Centric Framework for Omni-Cognition" ☆283 · Updated 4 months ago
- A family of lightweight multimodal models. ☆1,015 · Updated 5 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆739 · Updated last year
- Code for the Molmo Vision-Language Model ☆407 · Updated 4 months ago
- A family of versatile and state-of-the-art video tokenizers. ☆382 · Updated last month
- ☆228 · Updated 5 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆489 · Updated last week
- Ola: Pushing the Frontiers of Omni-Modal Language Model ☆334 · Updated 2 months ago
- [CVPR 2025] The code for "VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM" ☆193 · Updated last week
- Long Context Transfer from Language to Vision ☆374 · Updated last month
- The code for "Uni-MoE: Scaling Unified Multimodal Models with Mixture of Experts" ☆718 · Updated 3 weeks ago
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆237 · Updated 8 months ago