Ola-Omni / Ola
Ola: Pushing the Frontiers of Omni-Modal Language Model
☆354 · Updated last month
Alternatives and similar repositories for Ola
Users interested in Ola are comparing it to the libraries listed below.
- Official Implementation for "Lyra: An Efficient and Speech-Centric Framework for Omni-Cognition" (ICCV 2025) ☆289 · Updated 6 months ago
- [NeurIPS 2024] An official implementation of ShareGPT4Video: Improving Video Understanding and Generation with Better Captions ☆1,073 · Updated 9 months ago
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆246 · Updated 2 months ago
- [ICML 2025 Oral] An official implementation of VideoRoPE & VideoRoPE++ ☆176 · Updated last week
- ✨✨Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy ☆292 · Updated 2 months ago
- ✨✨VITA-Audio: Fast Interleaved Cross-Modal Token Generation for Efficient Large Speech-Language Model ☆623 · Updated 2 months ago
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆318 · Updated last month
- Vchitect-2.0: Parallel Transformer for Scaling Up Video Diffusion Models ☆915 · Updated 4 months ago
- Liquid: Language Models are Scalable and Unified Multi-modal Generators ☆608 · Updated 3 months ago
- Efficient Reasoning Vision Language Models ☆337 · Updated last week
- A family of versatile and state-of-the-art video tokenizers ☆407 · Updated 3 months ago
- [CVPR 2025] The code for "VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM" ☆248 · Updated last month
- Official repository of OmniCaptioner ☆156 · Updated 3 months ago
- Matrix-Game: Interactive World Foundation Model ☆823 · Updated last month
- [CVPR 2025] The First Investigation of CoT Reasoning (RL, TTS, Reflection) in Image Generation ☆779 · Updated 2 months ago
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆577 · Updated last year
- Unified Autoregressive Modeling for Visual Understanding and Generation ☆179 · Updated this week
- [ICCV 2025] SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree ☆500 · Updated last week
- [ICML 2025] PyTorch implementation of "OmniAudio: Generating Spatial Audio from 360-Degree Video" ☆320 · Updated last month
- The code for "Uni-MoE: Scaling Unified Multimodal Models with Mixture of Experts" ☆745 · Updated 2 months ago
- Code for the paper "Towards Understanding Camera Motions in Any Video" ☆203 · Updated this week
- GPT-ImgEval: Evaluating GPT-4o’s state-of-the-art image generation capabilities ☆285 · Updated 3 months ago
- Video generation from text and image, 1st generation ☆925 · Updated 2 months ago
- The first Large Audio Language Model that enables native in-depth thinking, trained on large-scale audio Chain-of-Thought data ☆239 · Updated 2 months ago
- Official implementation of "LLMGA: Multimodal Large Language Model based Generation Assistant" (ECCV 2024 Oral) ☆397 · Updated 2 months ago
- [ICLR 2024] Official codebase for "InstructCV: Instruction-Tuned Text-to-Image Diffusion Models as Vision Generalists" ☆462 · Updated last year
- [ICML 2025] Official repository for the paper "Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation" ☆165 · Updated 2 months ago