facebookexperimental / protoquant
Prototype routines for GPU quantization written using PyTorch.
☆19 · Updated this week
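The description above mentions GPU quantization routines in PyTorch. As background, here is a minimal plain-Python sketch of affine int8 quantization, the scale/zero-point scheme that quantization libraries typically build on. This is an illustrative sketch only, not protoquant's actual API; the function names and signatures are assumptions for the example.

```python
# Illustrative affine int8 quantization sketch (hypothetical helpers,
# not protoquant's API). A float v maps to round(v / scale) + zero_point,
# clamped to the int8 range; dequantization approximately inverts this.

def quantize(values, scale, zero_point, qmin=-128, qmax=127):
    """Quantize floats to clamped int8 codes."""
    return [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]

def dequantize(qvalues, scale, zero_point):
    """Reconstruct approximate floats from int8 codes."""
    return [(q - zero_point) * scale for q in qvalues]

x = [0.0, 0.5, -0.25, 1.0]
scale, zp = 1.0 / 127, 0          # symmetric range: zero_point = 0
q = quantize(x, scale, zp)        # small int codes
x_hat = dequantize(q, scale, zp)  # close to x, within one quantization step
```

The reconstruction error is bounded by half a quantization step (`scale / 2`), which is why choosing `scale` from the tensor's value range matters in practice.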
Alternatives and similar repositories for protoquant:
Users interested in protoquant are also comparing it to the libraries listed below.
- ☆21 · Updated 3 months ago
- Memory Optimizations for Deep Learning (ICML 2023) · ☆62 · Updated 11 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. · ☆86 · Updated this week
- ☆59 · Updated last week
- Experiment of using Tangent to autodiff Triton · ☆75 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry · ☆40 · Updated last year
- Make triton easier · ☆43 · Updated 8 months ago
- Extensible collectives library in Triton · ☆82 · Updated 4 months ago
- Implementation of Hyena Hierarchy in JAX · ☆10 · Updated last year
- PyTorch-centric eager-mode debugger · ☆44 · Updated 2 months ago
- Repository for CPU Kernel Generation for LLM Inference · ☆25 · Updated last year
- A block-oriented training approach for inference-time optimization. · ☆32 · Updated 5 months ago
- ☆88 · Updated 8 months ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings · ☆44 · Updated last year
- An experimental CPU backend for Triton (https://github.com/openai/triton) · ☆38 · Updated 9 months ago
- ☆38 · Updated 2 months ago
- A place to store reusable transformer components of my own creation or found on the interwebs · ☆44 · Updated this week
- Tutorial on how to convert machine-learned models into ONNX · ☆16 · Updated last year
- Implementation of a Light Recurrent Unit in PyTorch · ☆48 · Updated 4 months ago
- Torch Distributed Experimental · ☆115 · Updated 6 months ago
- Python package of rocm-smi-lib · ☆20 · Updated 4 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. · ☆61 · Updated 2 weeks ago
- FlexAttention w/ FlashAttention3 support · ☆26 · Updated 4 months ago
- Hacks for PyTorch · ☆18 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. · ☆43 · Updated 6 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. · ☆67 · Updated 8 months ago
- ☆67 · Updated 3 months ago
- ☆51 · Updated 6 months ago
- This repository contains the experimental PyTorch-native float8 training UX · ☆221 · Updated 6 months ago
- ☆45 · Updated last year