microsoft / varuna
☆236 · Updated 3 months ago
Related projects
Alternatives and complementary repositories for varuna
- Implementation of a Transformer, but completely in Triton ☆248 · Updated 2 years ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆146 · Updated this week
- Research and development for optimizing transformers ☆124 · Updated 3 years ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆163 · Updated this week
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆352 · Updated 5 months ago
- Applied AI experiments and examples for PyTorch ☆159 · Updated last week
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers ☆196 · Updated 2 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆211 · Updated 3 months ago
- A schedule language for large model training ☆141 · Updated 4 months ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆116 · Updated 2 years ago
- Torch Distributed Experimental ☆116 · Updated 3 months ago
- A library to analyze PyTorch traces. ☆297 · Updated this week
- Cataloging released Triton kernels. ☆132 · Updated 2 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆96 · Updated this week
- Triton-based implementation of Sparse Mixture of Experts. ☆184 · Updated 3 weeks ago
- Pipeline Parallelism for PyTorch ☆725 · Updated 2 months ago
- FTPipe and related pipeline model parallelism research. ☆41 · Updated last year
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆128 · Updated 2 years ago
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆81 · Updated 7 months ago
- Zero Bubble Pipeline Parallelism ☆279 · Updated this week
- Microsoft Automatic Mixed Precision Library ☆522 · Updated last month
- A tensor-aware point-to-point communication primitive for machine learning ☆247 · Updated last year
- Explorations into some recent techniques surrounding speculative decoding ☆209 · Updated last year
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆34 · Updated 2 years ago