VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo
☆1,676, updated Mar 1, 2026
Alternatives and similar repositories for VeOmni
Users interested in VeOmni are comparing it to the libraries listed below.
- ByteCheckpoint: A Unified Checkpointing Library for LFMs (☆270, updated Feb 2, 2026)
- Distributed Compiler based on Triton for Parallel Systems (☆1,371, updated Feb 13, 2026)
- verl: Volcano Engine Reinforcement Learning for LLMs (☆19,519, updated this week)
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference (☆644, updated Jan 15, 2026)
- A fast communication-overlapping library for tensor/expert parallelism on GPUs (☆1,264, updated Aug 28, 2025)
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs (☆984, updated this week)
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training (☆659, updated this week)
- Ring attention implementation with flash attention (☆987, updated Sep 10, 2025)
- 🚀 Efficient implementations of state-of-the-art linear attention models (☆4,474, updated this week)
- A PyTorch native platform for training generative AI models (☆5,098, updated Feb 28, 2026)
- Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible. (☆3,586, updated this week)
- VideoSys: An easy and efficient system for video generation (☆2,016, updated Aug 27, 2025)
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) (☆9,084, updated this week)
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism (☆2,549, updated Feb 26, 2026)
- FlashInfer: Kernel Library for LLM Serving (☆5,057, updated this week)
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… (☆1,551, updated Jun 14, 2025)
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL (☆4,649, updated Feb 26, 2026)
- Ongoing research training transformer models at scale (☆15,461, updated this week)
- slime is an LLM post-training framework for RL Scaling (☆4,536, updated this week)
- Open-source unified multimodal model (☆5,704, updated Oct 27, 2025)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (☆3,176, updated Feb 28, 2026)
- [ICLR & NeurIPS 2025] Repository for Show-o series, One Single Transformer to Unify Multimodal Understanding and Generation (☆1,887, updated Jan 8, 2026)
- Efficient Triton Kernels for LLM Training (☆6,189, updated this week)
- Fast and memory-efficient exact attention (☆22,460, updated this week)
- My learning notes for ML SYS (☆5,444, updated Jan 30, 2026)
- MMaDA - Open-Sourced Multimodal Large Diffusion Language Models (dLLMs with block diffusion, mixed-CoT, unified RL) (☆1,591, updated Feb 14, 2026)
- Next-Token Prediction is All You Need (☆2,355, updated Jan 12, 2026)
- Official Repo for Open-Reasoner-Zero (☆2,087, updated Jun 2, 2025)
- A unified inference and post-training framework for accelerated video generation (☆3,111, updated Feb 28, 2026)
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI (☆4,843, updated this week)
- MoBA: Mixture of Block Attention for Long-Context LLMs (☆2,073, updated Apr 3, 2025)
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities (☆1,164, updated Jul 15, 2025)
- Long-RL: Scaling RL to Long Sequences (NeurIPS 2025) (☆693, updated Sep 24, 2025)
- Official implementation of BLIP3o-Series (☆1,637, updated Nov 29, 2025)
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud (☆1,534, updated Dec 15, 2025)
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training (☆2,926, updated Jan 14, 2026)