facebookexperimental / protoquant
Prototype routines for GPU quantization written using PyTorch.
☆19 · Updated 3 weeks ago
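As a rough illustration of what a GPU quantization routine involves, the sketch below performs symmetric per-tensor int8 quantization with stock PyTorch APIs (`torch.quantize_per_tensor`). This is only a generic example under those assumptions, not protoquant's own interface.

```python
import torch

# Minimal sketch of symmetric per-tensor int8 quantization using only core
# PyTorch APIs; protoquant's actual routines and API may differ.
x = torch.randn(4, 8)

scale = x.abs().max().item() / 127.0  # map the observed range onto int8
q = torch.quantize_per_tensor(x, scale=scale, zero_point=0, dtype=torch.qint8)

print(q.int_repr())    # underlying int8 values
print(q.dequantize())  # approximate float reconstruction
```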
Alternatives and similar repositories for protoquant:
Users interested in protoquant are comparing it to the libraries listed below.
- ☆21 · Updated 2 months ago
- Hacks for PyTorch ☆18 · Updated last year
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated 3 months ago
- Experiment of using Tangent to autodiff triton ☆74 · Updated 11 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆75 · Updated this week
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference ☆25 · Updated last year
- Memory Optimizations for Deep Learning (ICML 2023) ☆62 · Updated 10 months ago
- ☆57 · Updated 7 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆57 · Updated 2 months ago
- A block-oriented training approach for inference-time optimization. ☆32 · Updated 5 months ago
- Make triton easier ☆42 · Updated 7 months ago
- PyTorch centric eager mode debugger ☆43 · Updated last month
- Faster PyTorch bitsandbytes 4bit fp4 nn.Linear ops ☆24 · Updated 10 months ago
- Torch Distributed Experimental ☆115 · Updated 5 months ago
- TORCH_LOGS parser for PT2 ☆30 · Updated last week
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆37 · Updated 8 months ago
- Extensible collectives library in Triton