veScale: ByteDance PyTorch Distributed for Hyperscale Training of LLMs and RL
☆1,000 · Updated Mar 3, 2026
Alternatives and similar repositories for veScale
Users interested in veScale are comparing it to the libraries listed below.
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆273 · Updated Feb 2, 2026
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,273 · Updated Aug 28, 2025
- Distributed Compiler based on Triton for Parallel Systems ☆1,394 · Updated Mar 11, 2026
- Zero Bubble Pipeline Parallelism ☆452 · Updated May 7, 2025
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆653 · Updated Jan 15, 2026
- DLRover: An Automatic Distributed Deep Learning System ☆1,641 · Updated Mar 16, 2026
- FlashInfer: Kernel Library for LLM Serving ☆5,194 · Updated this week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,953 · Updated this week
- Ring attention implementation with flash attention ☆998 · Updated Sep 10, 2025
- Ongoing research training transformer models at scale ☆15,744 · Updated this week
- A PyTorch native platform for training generative AI models ☆5,162 · Updated this week
- A model compilation solution for various hardware ☆469 · Updated Aug 20, 2025
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,231 · Updated this week
- Microsoft Collective Communication Library ☆387 · Updated Sep 20, 2023
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo ☆1,745 · Updated this week
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆490 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆950 · Updated Oct 29, 2025
- Perplexity GPU Kernels ☆564 · Updated Nov 7, 2025
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆271 · Updated Mar 19, 2026
- Efficient and easy multi-instance LLM serving ☆536 · Updated Mar 12, 2026
- Tile primitives for speedy kernels ☆3,244 · Updated Mar 17, 2026
- LLM training technologies developed by Kwai ☆71 · Updated Jan 21, 2026
- nnScaler: Compiling DNN models for Parallel Training ☆126 · Updated Sep 23, 2025
- GLake: optimizing GPU memory management and IO transmission. ☆498 · Updated Mar 24, 2025
- Pipeline Parallelism for PyTorch ☆786 · Updated Aug 21, 2024
- Best practices for training LLaMA models in Megatron-LM ☆663 · Updated Jan 2, 2024
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆487 · Updated this week
- Microsoft Automatic Mixed Precision Library ☆635 · Updated Dec 1, 2025
- NCCL Profiling Kit ☆152 · Updated Jul 1, 2024
- ☆358 · Updated Jan 28, 2026
- Optimized primitives for collective multi-GPU communication ☆4,562 · Updated this week
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆2,572 · Updated Mar 18, 2026
- Scripts for managing a large H100 cluster and fixing hardware issues to ensure smooth model training. ☆323 · Updated Aug 20, 2024
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 ☆978 · Updated Mar 6, 2026
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,236 · Updated Aug 14, 2025
- Training and serving large-scale neural networks with auto parallelization. ☆3,187 · Updated Dec 9, 2023
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆24,829 · Updated this week
- FlagGems is an operator library for large language models implemented in the Triton Language. ☆926 · Updated this week
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Updated Aug 19, 2024