☆220 · Updated Aug 17, 2023
Alternatives and similar repositories for veGiantModel
Users interested in veGiantModel are comparing it to the libraries listed below.
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 · Updated Mar 31, 2023
- A high performance and generic framework for distributed DNN training ☆3,718 · Updated Oct 3, 2023
- gossip: Efficient Communication Primitives for Multi-GPU Systems ☆62 · Updated Jul 1, 2022
- OneFlow Serving ☆20 · Updated Apr 10, 2025
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆477 · Updated Mar 15, 2024
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ☆1,000 · Updated Mar 3, 2026
- A tensor-aware point-to-point communication primitive for machine learning ☆284 · Updated Dec 17, 2025
- High performance distributed framework for training deep learning recommendation models based on PyTorch. ☆411 · Updated Jun 14, 2025
- A fast and user-friendly runtime for Transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. ☆1,542 · Updated Jul 18, 2025
- BytePS examples (Vision, NLP, GAN, etc.) ☆19 · Updated Nov 24, 2022
- Bagua Speeds up PyTorch ☆884 · Updated Aug 1, 2024
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆921 · Updated Dec 30, 2024
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆336 · Updated Dec 13, 2025
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,302 · Updated May 16, 2023
- ParaGen is a PyTorch deep learning framework for parallel sequence generation. ☆186 · Updated Nov 21, 2022
- PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone. ☆778 · Updated Nov 18, 2025
- Collective communications library with various primitives for multi-machine training. ☆1,405 · Updated Mar 11, 2026
- A library developed by Volcano Engine for high-performance reading and writing of PyTorch model files. ☆25 · Updated Jan 2, 2025
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆170 · Updated Feb 11, 2026
- Simple Distributed Deep Learning on TensorFlow ☆134 · Updated Feb 5, 2026
- Transformer-related optimization, including BERT and GPT ☆6,397 · Updated Mar 27, 2024
- HugeCTR is a high-efficiency GPU framework designed for Click-Through Rate (CTR) estimation training ☆1,052 · Updated Mar 12, 2026
- A schedule language for large model training ☆152 · Updated Aug 21, 2025
- LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training ☆406 · Updated Jul 31, 2025
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,864 · Updated Mar 12, 2026
- Ongoing research training transformer models at scale ☆15,647 · Updated this week
- OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient. ☆9,389 · Updated Dec 4, 2025
- An auxiliary project analyzing the characteristics of KV in DiT Attention. ☆33 · Updated Nov 29, 2024
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆2,233 · Updated Aug 14, 2025
- A lightweight parameter server interface ☆1,561 · Updated Mar 2, 2026
- PyTorch extensions for high performance and large scale training. ☆3,403 · Updated Apr 26, 2025
- PyTorch compilation tutorial covering TorchScript, torch.fx, and Slapo ☆17 · Updated Mar 13, 2023
- ☆79 · Updated Dec 15, 2023
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆98 · Updated Apr 22, 2023
- High performance NCCL plugin for Bagua. ☆15 · Updated Sep 15, 2021
- ☆44 · Updated Sep 6, 2021
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆122 · Updated Nov 15, 2023
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆66 · Updated Mar 21, 2022
- Slicing a PyTorch Tensor Into Parallel Shards ☆300 · Updated Jun 7, 2025