inclusionAI / Ming
Ming - facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM.
☆569 · Updated 2 months ago
Alternatives and similar repositories for Ming
Users interested in Ming are comparing it to the libraries listed below.
- ☆185 · Updated 11 months ago
- ☆288 · Updated 5 months ago
- NextStep-1: SOTA Autoregressive Image Generation with Continuous Tokens. A research project developed by StepFun's Multimodal Intellige… ☆594 · Updated 3 weeks ago
- ☆710 · Updated last month
- MiMo-VL ☆619 · Updated 4 months ago
- GLM-Image: Auto-regressive for Dense-knowledge and High-fidelity Image Generation. ☆524 · Updated this week
- Official implementation of UnifiedReward & [NeurIPS 2025] UnifiedReward-Think ☆660 · Updated last week
- ☆145 · Updated 5 months ago
- [NeurIPS 2025 Spotlight] A Unified Tokenizer for Visual Generation and Understanding ☆496 · Updated 2 months ago
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) ☆75 · Updated 9 months ago
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ☆268 · Updated last month
- Multimodal Models in Real World ☆552 · Updated 10 months ago
- The official repo for "Vidi: Large Multimodal Models for Video Understanding and Editing" ☆556 · Updated last month
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab. ☆277 · Updated 3 months ago
- HumanOmni ☆211 · Updated 10 months ago
- A unified model that seamlessly integrates multimodal understanding, text-to-image generation, and image editing within a single powerfu… ☆445 · Updated last month
- video-SALMONN 2 is a powerful audio-visual large language model (LLM) that generates high-quality audio-visual video captions, which is d… ☆136 · Updated 3 weeks ago
- LiveCC: Learning Video LLM with Streaming Speech Transcription at Scale (CVPR 2025) ☆363 · Updated 2 months ago
- An official implementation of "CapRL: Stimulating Dense Image Caption Capabilities via Reinforcement Learning" ☆172 · Updated 2 weeks ago
- UniWorld: High-Resolution Semantic Encoders for Unified Visual Understanding and Generation ☆832 · Updated 3 weeks ago
- VARGPT-v1.1: Improve Visual Autoregressive Large Unified Model via Iterative Instruction Tuning and Reinforcement Learning ☆269 · Updated 9 months ago
- This is the official repo for the paper "LongCat-Flash-Omni Technical Report" ☆452 · Updated last month
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆272 · Updated 11 months ago
- Tarsier -- a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆510 · Updated 5 months ago
- Kling-Foley: Multimodal Diffusion Transformer for High-Quality Video-to-Audio Generation ☆62 · Updated 6 months ago
- [ICML 2025] Official PyTorch implementation of LongVU ☆417 · Updated 8 months ago
- Official inference code and LongText-Bench benchmark for our paper X-Omni (https://arxiv.org/pdf/2507.22058). ☆410 · Updated 4 months ago
- Long Context Transfer from Language to Vision ☆398 · Updated 9 months ago
- 🔥🔥 First-ever hour-scale video understanding models ☆599 · Updated 6 months ago
- ☆77 · Updated 8 months ago