gau-nernst / quantized-training
Explore training for quantized models
☆26 · Updated 6 months ago
Alternatives and similar repositories for quantized-training
Users interested in quantized-training are comparing it to the libraries listed below.
- Fast low-bit matmul kernels in Triton ☆427 · Updated last week
- ☆118 · Updated last month
- A bunch of kernels that might make stuff slower 😉 ☆75 · Updated this week
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆165 · Updated 2 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆93 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- Applied AI experiments and examples for PyTorch ☆315 · Updated 5 months ago
- Extensible collectives library in Triton ☆95 · Updated 10 months ago
- ☆61 · Updated 2 years ago
- Official implementation for Training LLMs with MXFP4 ☆118 · Updated 9 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆219 · Updated last week
- ☆85 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆142 · Updated 8 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆238 · Updated this week
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline ☆127 · Updated last year
- ☆286 · Updated last week
- ☆104 · Updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆324 · Updated this week
- ☆160 · Updated 2 years ago
- ☆131 · Updated 8 months ago
- Cataloging released Triton kernels ☆292 · Updated 5 months ago
- Framework to reduce autotune overhead to zero for well-known deployments ☆96 · Updated 4 months ago
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆112 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆281 · Updated 3 months ago
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆233 · Updated 7 months ago
- Collection of kernels written in the Triton language ☆178 · Updated 2 weeks ago
- ring-attention experiments ☆165 · Updated last year
- ☆118 · Updated 8 months ago
- ☆71 · Updated 10 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆120 · Updated last year