microsoft / varuna
☆246 · Updated 7 months ago
Alternatives and similar repositories for varuna:
Users interested in varuna are comparing it to the libraries listed below.
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆154 · Updated 2 months ago
- Research and development for optimizing transformers ☆125 · Updated 4 years ago
- Implementation of a Transformer, but completely in Triton ☆259 · Updated 2 years ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆188 · Updated this week
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers ☆206 · Updated 6 months ago
- ☆100 · Updated 6 months ago
- A library to analyze PyTorch traces. ☆340 · Updated last week
- Cataloging released Triton kernels. ☆176 · Updated last month
- This repository contains the experimental PyTorch native float8 training UX ☆221 · Updated 7 months ago
- Applied AI experiments and examples for PyTorch ☆232 · Updated this week
- A schedule language for large model training ☆144 · Updated 8 months ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆116 · Updated 3 years ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆106 · Updated 3 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆201 · Updated 3 months ago
- ☆92 · Updated 2 years ago
- Torch Distributed Experimental ☆115 · Updated 6 months ago
- ☆72 · Updated 3 years ago
- Pipeline Parallelism for PyTorch ☆753 · Updated 6 months ago
- Collection of kernels written in the Triton language ☆107 · Updated last week
- FTPipe and related pipeline model parallelism research. ☆41 · Updated last year
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆93 · Updated last week
- Fast low-bit matmul kernels in Triton ☆250 · Updated last week
- Zero Bubble Pipeline Parallelism ☆345 · Updated 3 weeks ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆38 · Updated 2 years ago
- ☆159 · Updated 8 months ago
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆138 · Updated this week
- Extensible collectives library in Triton ☆83 · Updated 5 months ago
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆392 · Updated this week
- ☆79 · Updated 2 years ago
- ☆116 · Updated 11 months ago