ByteDance-Seed / VeOmni
VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo
☆1,283 · Updated this week
Alternatives and similar repositories for VeOmni
Users interested in VeOmni are comparing it to the libraries listed below
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆591 · Updated last month
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆639 · Updated 3 weeks ago
- Ring attention implementation with flash attention ☆906 · Updated 2 months ago
- ☆431 · Updated 3 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆920 · Updated 7 months ago
- ☆817 · Updated 5 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆550 · Updated this week
- ☆205 · Updated 2 weeks ago
- Muon is Scalable for LLM Training ☆1,354 · Updated 3 months ago
- slime is an LLM post-training framework for RL Scaling. ☆2,407 · Updated last week
- An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models ☆2,264 · Updated last week
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆322 · Updated 6 months ago
- [Preprint] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification. ☆495 · Updated last week
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud. ☆1,429 · Updated this week
- FlagScale is a large model toolkit based on open-sourced projects. ☆404 · Updated last week
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆470 · Updated this week
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ☆765 · Updated this week
- An Open-source RL System from ByteDance Seed and Tsinghua AIR ☆1,626 · Updated 6 months ago
- Long-RL: Scaling RL to Long Sequences (NeurIPS 2025) ☆652 · Updated last month
- Fast inference from large language models via speculative decoding ☆848 · Updated last year
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆1,102 · Updated 4 months ago
- Official Repo for Open-Reasoner-Zero ☆2,060 · Updated 5 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆499 · Updated 9 months ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆251 · Updated 4 months ago
- Speed Always Wins: A Survey on Efficient Architectures for Large Language Models ☆357 · Updated this week
- ☆971 · Updated 3 weeks ago
- 青稞Talk ☆160 · Updated last week
- A fork to add multimodal model training to open-r1 ☆1,416 · Updated 9 months ago
- InternEvo is an open-sourced lightweight training framework that aims to support model pre-training without the need for extensive dependencies… ☆411 · Updated 2 months ago
- 📚 A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc. 🎉 ☆438 · Updated 2 months ago