ezyang / torchdbg
PyTorch-centric eager mode debugger
☆47 · Updated 8 months ago
Alternatives and similar repositories for torchdbg
Users interested in torchdbg are comparing it to the libraries listed below.
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆59 · Updated last week
- ☆21 · Updated 5 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 ☆46 · Updated last year
- Make Triton easier ☆47 · Updated last year
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆158 · Updated 2 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN ☆73 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 2 months ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆102 · Updated last year
- ☆87 · Updated last year
- Context manager to profile the forward and backward times of PyTorch's nn.Module ☆83 · Updated last year
- ☆118 · Updated last year
- A library for unit scaling in PyTorch ☆129 · Updated last month
- Torch Distributed Experimental ☆117 · Updated last year
- This repository contains the experimental PyTorch-native float8 training UX ☆224 · Updated last year
- TORCH_LOGS parser for PT2 ☆55 · Updated this week
- Utilities for training very large models ☆58 · Updated 10 months ago
- TorchFix - a linter for PyTorch-using code with autofix support ☆144 · Updated 6 months ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated 2 years ago
- Hacks for PyTorch ☆19 · Updated 2 years ago
- A bunch of kernels that might make stuff slower 😉 ☆58 · Updated this week
- Extensible collectives library in Triton ☆88 · Updated 4 months ago
- A block-oriented training approach for inference-time optimization ☆33 · Updated last year
- FlexAttention w/ FlashAttention3 support ☆27 · Updated 10 months ago
- Prototype routines for GPU quantization written using PyTorch ☆21 · Updated 2 weeks ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate ☆260 · Updated this week
- ☆45 · Updated last year
- Various transformers for FSDP research ☆38 · Updated 2 years ago
- Ring-attention experiments ☆149 · Updated 10 months ago
- JAX bindings for Flash Attention v2 ☆90 · Updated 3 weeks ago