boweiliu / nccl
Optimized primitives for collective multi-GPU communication
☆10 · Updated last year
Alternatives and similar repositories for nccl
Users interested in nccl are comparing it to the libraries listed below.
- ☆21 · Updated 11 months ago
- ☆323 · Updated last year
- ☆20 · Updated 2 years ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 8 months ago
- Two implementations of ZeRO-1 optimizer sharding in JAX ☆14 · Updated 2 years ago
- Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code ☆10 · Updated 2 years ago
- Experiment of using Tangent to autodiff Triton ☆82 · Updated 2 years ago
- PyTorch-centric eager mode debugger ☆48 · Updated last year
- Ring-attention experiments ☆165 · Updated last year
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆32 · Updated 8 months ago
- This repository hosts code that supports the testing infrastructure for the PyTorch organization. For example, this repo hosts the logic … ☆105 · Updated this week
- Custom Triton kernels for training Karpathy's nanoGPT. ☆19 · Updated last year
- This repository contains the experimental PyTorch-native float8 training UX ☆226 · Updated last year
- train with kittens! ☆63 · Updated last year
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆280 · Updated 2 months ago
- JAX bindings for Flash Attention v2 ☆103 · Updated last week
- ☆47 · Updated 2 years ago
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 2 months ago
- A bunch of kernels that might make stuff slower 😉 ☆75 · Updated last week
- A library for unit scaling in PyTorch ☆133 · Updated 7 months ago
- A flexible and efficient implementation of Flash Attention 2.0 for JAX, supporting multiple backends (GPU/TPU/CPU) and platforms (Triton/… ☆34 · Updated 11 months ago
- ☆124 · Updated last year
- Extensible collectives library in Triton ☆95 · Updated 10 months ago
- Tokamax: a GPU and TPU kernel library ☆170 · Updated this week
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆262 · Updated this week
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 ☆46 · Updated last year
- seqax = sequence modeling + JAX ☆170 · Updated 6 months ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆475 · Updated last week
- Parallel framework for training and fine-tuning deep neural networks ☆70 · Updated 3 months ago
- LLM checkpointing for DeepSpeed/Megatron ☆24 · Updated 2 months ago