VITA-MLLM / Long-VITA
✨✨Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy
☆275 · Updated last month
Alternatives and similar repositories for Long-VITA:
Users interested in Long-VITA are comparing it to the libraries listed below.
- A collection of multimodal reasoning papers, codes, datasets, benchmarks and resources. ☆199 · Updated last week
- Official Implementation for "Lyra: An Efficient and Speech-Centric Framework for Omni-Cognition" ☆283 · Updated 4 months ago
- [ICML 2025 Spotlight] An official implementation of VideoRoPE: What Makes for Good Video Rotary Position Embedding? ☆139 · Updated 2 weeks ago
- ✨✨VITA-Audio: Fast Interleaved Cross-Modal Token Generation for Efficient Large Speech-Language Model ☆75 · Updated this week
- A post-training method to enhance CLIP's fine-grained visual representations with generative models. ☆48 · Updated last month
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator ☆107 · Updated last month
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ☆174 · Updated 6 months ago
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM". ☆246 · Updated 4 months ago
- Official repository of T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT ☆193 · Updated this week
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ☆139 · Updated 5 months ago
- An open-source implementation for training LLaVA-NeXT. ☆393 · Updated 6 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆149 · Updated last week
- [ECCV 2024] Efficient Inference of Vision Instruction-Following Models with Elastic Cache ☆42 · Updated 9 months ago
- ☆138 · Updated 2 weeks ago
- (NeurIPS 2024) Official PyTorch implementation of LOVA3 ☆83 · Updated last month
- u-LLaVA: Unifying Multi-Modal Tasks via Large Language Model ☆131 · Updated 3 weeks ago
- GPT-ImgEval: Evaluating GPT-4o’s state-of-the-art image generation capabilities ☆252 · Updated last week
- [NAACL 2025 Oral] 🎉 From redundancy to relevance: Enhancing explainability in multimodal large language models ☆95 · Updated 2 months ago
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-language Models ☆94 · Updated last year
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆303 · Updated 2 months ago
- Multi-granularity Correspondence Learning from Long-term Noisy Videos [ICLR 2024, Oral] ☆113 · Updated last year
- Official implementation of X-Prompt: Towards Universal In-Context Image Generation in Auto-Regressive Vision Language Foundation Models ☆153 · Updated 5 months ago
- 🚀 [NeurIPS 2024] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark. ☆83 · Updated last month
- Ola: Pushing the Frontiers of Omni-Modal Language Model ☆334 · Updated 2 months ago
- ☆135 · Updated 4 months ago
- ☆104 · Updated 2 months ago
- [CVPR 2025] The code for "VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM" ☆193 · Updated last week
- ☆102 · Updated last month
- CoS: Chain-of-Shot Prompting for Long Video Understanding ☆47 · Updated 2 months ago
- [ICLR 2025] BiGR: Harnessing Binary Latent Codes for Image Generation and Improved Visual Representation Capabilities ☆142 · Updated 3 months ago