gau-nernst / quantized-training
Explore training for quantized models
☆26 · Updated 6 months ago
Alternatives and similar repositories for quantized-training
Users interested in quantized-training are comparing it to the libraries listed below.
- ☆119 · Updated last month
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆165 · Updated 2 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆93 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆75 · Updated this week
- Fast low-bit matmul kernels in Triton ☆427 · Updated last week
- Experimental PyTorch-native float8 training UX ☆227 · Updated last year
- Official implementation of Training LLMs with MXFP4 ☆118 · Updated 9 months ago
- Extensible collectives library in Triton ☆95 · Updated 10 months ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline ☆127 · Updated last year
- ☆85 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM ☆142 · Updated 8 months ago
- Applied AI experiments and examples for PyTorch ☆315 · Updated 5 months ago
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆281 · Updated 3 months ago
- Collection of kernels written in the Triton language ☆178 · Updated 2 weeks ago
- ☆104 · Updated last year
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆120 · Updated last year
- ☆160 · Updated 2 years ago
- ☆163 · Updated 7 months ago
- Framework that reduces autotuning overhead to zero for well-known deployments ☆96 · Updated 4 months ago
- ☆61 · Updated 2 years ago
- Tritonbench: a collection of PyTorch custom operators with example inputs for measuring their performance ☆324 · Updated last week
- ☆118 · Updated 8 months ago
- HALO: Hadamard-Assisted Low-Precision Optimization and Training for finetuning LLMs 🚀 Official implementation of https://arx… ☆29 · Updated 11 months ago
- ☆131 · Updated 8 months ago
- Sparse finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated 2 years ago
- A safetensors extension for efficiently storing sparse quantized tensors on disk ☆244 · Updated this week
- PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆66 · Updated 10 months ago
- QuIP quantization ☆61 · Updated last year
- ☆286 · Updated last week
- Ship correct and fast LLM kernels to PyTorch ☆140 · Updated 3 weeks ago