NVlabs / EAGLE
Eagle Family: Exploring Model Designs, Data Recipes and Training Strategies for Frontier-Class Multimodal LLMs
☆607 · Updated 3 weeks ago
Alternatives and similar repositories for EAGLE:
Users interested in EAGLE are comparing it to the repositories listed below.
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆541 · Updated 8 months ago
- ☆372 · Updated 2 months ago
- An open-source implementation for training LLaVA-NeXT. ☆378 · Updated 3 months ago
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆289 · Updated this week
- 🔥 Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos ☆884 · Updated last week
- [NeurIPS 2024] An official implementation of ShareGPT4Video: Improving Video Understanding and Generation with Better Captions ☆1,039 · Updated 4 months ago
- ☆216 · Updated 2 months ago
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM". ☆236 · Updated last month
- (AAAI 2024) BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions ☆253 · Updated 10 months ago
- Accelerating the development of large multimodal models (LMMs) with the one-click evaluation module lmms-eval. ☆2,116 · Updated this week
- [ECCV 2024] Official PyTorch implementation code for realizing the technical part of Mixture of All Intelligence (MoAI) to improve perfor… ☆318 · Updated 10 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Feature Pyramid via Hierarchical Window Transformer ☆366 · Updated last month
- LLM2CLIP makes a SOTA pretrained CLIP model even more SOTA. ☆470 · Updated last month
- ☆345 · Updated 8 months ago
- Lyra: An Efficient and Speech-Centric Framework for Omni-Cognition ☆276 · Updated last month
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ☆202 · Updated 5 months ago
- [ICLR 2025] Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation. ☆1,220 · Updated last week
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆313 · Updated 7 months ago
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? ☆153 · Updated 4 months ago
- [NeurIPS 2024] OmniTokenizer: one model and one weight for image-video joint tokenization. ☆279 · Updated 7 months ago
- Rethinking Step-by-step Visual Reasoning in LLMs ☆247 · Updated 3 weeks ago
- Official repository for the paper PLLaVA ☆638 · Updated 6 months ago
- Real-time and accurate open-vocabulary end-to-end object detection ☆1,167 · Updated 2 months ago
- A family of versatile and state-of-the-art video tokenizers. ☆337 · Updated last month
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆722 · Updated last year
- Code for the Molmo Vision-Language Model ☆292 · Updated 2 months ago
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆516 · Updated 9 months ago
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆229 · Updated 3 weeks ago
- Anole: An Open, Autoregressive and Native Multimodal Model for Interleaved Image-Text Generation ☆721 · Updated 6 months ago