ByteDance-Seed / VeOmni
VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo
☆1,231 · Updated this week
Alternatives and similar repositories for VeOmni
Users that are interested in VeOmni are comparing it to the libraries listed below
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆582 · Updated last week
- Ring attention implementation with flash attention ☆903 · Updated last month
- ☆428 · Updated 2 months ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆581 · Updated 2 weeks ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆903 · Updated 7 months ago
- ☆817 · Updated 4 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆537 · Updated last week
- An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models ☆2,083 · Updated last week
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆323 · Updated 6 months ago
- slime is an LLM post-training framework for RL Scaling. ☆2,232 · Updated this week
- ☆203 · Updated 6 months ago
- [Preprint] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification. ☆471 · Updated this week
- Muon is Scalable for LLM Training ☆1,336 · Updated 2 months ago
- FlagScale is a large model toolkit based on open-sourced projects. ☆362 · Updated last week
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud. ☆1,395 · Updated this week
- Fast inference from large language models via speculative decoding ☆841 · Updated last year
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆1,075 · Updated 3 months ago
- 青稞Talk (Qingke Talk) ☆151 · Updated last week
- Long-RL: Scaling RL to Long Sequences (NeurIPS 2025) ☆644 · Updated last month
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆439 · Updated this week
- An Open-source RL System from ByteDance Seed and Tsinghua AIR ☆1,597 · Updated 5 months ago
- Official Repo for Open-Reasoner-Zero ☆2,054 · Updated 4 months ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆249 · Updated 3 months ago
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆389 · Updated last month
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ☆746 · Updated last week
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆494 · Updated 8 months ago
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,145 · Updated last month
- InternEvo is an open-sourced lightweight training framework that aims to support model pre-training without the need for extensive dependencies. ☆410 · Updated 2 months ago
- Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible. ☆2,853 · Updated this week
- A sparse attention kernel supporting mixed sparse patterns ☆322 · Updated 8 months ago