NVlabs / EAGLE
EAGLE: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders
Related projects
Alternatives and complementary repositories for EAGLE
- MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution
- An open-source implementation for training LLaVA-NeXT.
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization
- [NeurIPS 2024] An official implementation of ShareGPT4Video: Improving Video Understanding and Generation with Better Captions
- [AAAI 2024] BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions
- SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree
- LLaVA-UHD: An LMM Perceiving Any Aspect Ratio and High-Resolution Images
- Vchitect-2.0: Parallel Transformer for Scaling Up Video Diffusion Models
- Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation.
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM".
- [NeurIPS 2024] OmniTokenizer: One model and one weight for image-video joint tokenization.
- Long Context Transfer from Language to Vision
- [ECCV 2024] Official PyTorch implementation of the technical part of Mixture of All Intelligence (MoAI) to improve perfor…
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?
- Official repository for the paper PLLaVA
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts
- LLM2CLIP makes a SOTA pretrained CLIP model even stronger.
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models
- When do we not need larger vision models?
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture
- Official repository of the paper VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding
- Cobra: Extending Mamba to Multi-modal Large Language Model for Efficient Inference
- LLaVA-HR: High-Resolution Large Language-Vision Assistant
- [ECCV 2024 Oral] The official implementation of "LLMGA: Multimodal Large Language Model based Generation Assistant"
- The official implementation of Self-Play Preference Optimization (SPPO)
- Accelerating the development of large multimodal models (LMMs) with lmms-eval