meta-pytorch / torchft
Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo)
☆459 · Updated 2 weeks ago
Alternatives and similar repositories for torchft
Users interested in torchft are comparing it with the libraries listed below.
- Scalable and Performant Data Loading ☆352 · Updated last week
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆691 · Updated this week
- Load compute kernels from the Hub ☆352 · Updated last week
- PyTorch Single Controller ☆928 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆271 · Updated 3 weeks ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆587 · Updated 4 months ago
- Fast low-bit matmul kernels in Triton ☆410 · Updated this week
- A Quirky Assortment of CuTe Kernels ☆701 · Updated last week
- A library to analyze PyTorch traces. ☆449 · Updated last week
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- ☆263 · Updated this week
- Applied AI experiments and examples for PyTorch ☆311 · Updated 4 months ago
- LLM KV cache compression made easy ☆717 · Updated last week
- torchcomms: a modern PyTorch communications API ☆309 · Updated this week
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆216 · Updated last week
- kernels, of the mega variety ☆631 · Updated 2 months ago
- TPU inference for vLLM, with unified JAX and PyTorch support. ☆199 · Updated this week
- Accelerating MoE with IO and Tile-aware Optimizations ☆351 · Updated this week
- Where GPUs get cooked 👩🍳🔥 ☆339 · Updated 3 months ago
- Helpful kernel tutorials and examples for tile-based GPU programming ☆456 · Updated this week
- ring-attention experiments ☆160 · Updated last year
- PyTorch-native post-training at scale ☆572 · Updated this week
- Perplexity GPU Kernels ☆539 · Updated last month
- ☆317 · Updated last year
- FlexAttention based, minimal vllm-style inference engine for fast Gemma 2 inference. ☆327 · Updated last month
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆239 · Updated this week
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆396 · Updated 6 months ago
- Cataloging released Triton kernels. ☆278 · Updated 3 months ago
- ☆340 · Updated 2 weeks ago
- Ship correct and fast LLM kernels to PyTorch ☆126 · Updated this week