gau-nernst / quantized-training
Explore training for quantized models
☆25 · Updated 4 months ago
Alternatives and similar repositories for quantized-training
Users interested in quantized-training are comparing it to the libraries listed below.
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆85 · Updated last year
- Fast low-bit matmul kernels in Triton ☆398 · Updated this week
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆134 · Updated last week
- Extensible collectives library in Triton ☆91 · Updated 7 months ago
- Applied AI experiments and examples for PyTorch ☆305 · Updated 3 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆225 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆257 · Updated last month
- ☆83 · Updated 10 months ago
- ☆107 · Updated this week
- A bunch of kernels that might make stuff slower 😉 ☆64 · Updated this week
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆122 · Updated last year
- Cataloging released Triton kernels. ☆267 · Updated 2 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- ☆113 · Updated last year
- ☆93 · Updated last year
- Collection of kernels written in Triton language ☆167 · Updated 7 months ago
- Triton-based Symmetric Memory operators and examples ☆63 · Updated last month
- ☆250 · Updated this week
- ☆71 · Updated 7 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆216 · Updated last week
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆111 · Updated last year
- Official implementation for Training LLMs with MXFP4 ☆109 · Updated 6 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆272 · Updated 4 months ago
- ☆158 · Updated 2 years ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆130 · Updated 5 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆248 · Updated last month
- ring-attention experiments ☆155 · Updated last year
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for finetuning LLMs. 🚀 The official implementation of https://arx… ☆29 · Updated 9 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆286 · Updated this week
- How to ensure correctness and ship LLM-generated kernels in PyTorch ☆121 · Updated last week