gau-nernst / quantized-training
Explore training for quantized models
☆24 · Updated 2 months ago
Alternatives and similar repositories for quantized-training
Users interested in quantized-training are comparing it to the libraries listed below.
- Fast low-bit matmul kernels in Triton ☆365 · Updated this week
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated last year
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆92 · Updated last week
- Extensible collectives library in Triton ☆87 · Updated 5 months ago
- Applied AI experiments and examples for PyTorch ☆294 · Updated 3 weeks ago
- A bunch of kernels that might make stuff slower 😉 ☆59 · Updated this week
- ☆94 · Updated 3 weeks ago
- Collection of kernels written in the Triton language ☆154 · Updated 5 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆82 · Updated last year
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- Cataloging released Triton kernels ☆257 · Updated last week
- ☆88 · Updated 10 months ago
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆233 · Updated 2 weeks ago
- High-speed GEMV kernels with up to 2.7× speedup over the PyTorch baseline ☆114 · Updated last year
- ☆234 · Updated last week
- ☆82 · Updated 7 months ago
- A minimal cache manager for PagedAttention, built on top of llama3 ☆120 · Updated last year
- Official implementation for Training LLMs with MXFP4 ☆89 · Updated 4 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆221 · Updated 4 months ago
- ☆142 · Updated 7 months ago
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆111 · Updated 11 months ago
- ☆111 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts ☆239 · Updated 3 weeks ago
- ☆159 · Updated 2 years ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) ☆265 · Updated 2 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance ☆223 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM ☆116 · Updated 3 months ago
- Ring-attention experiments ☆152 · Updated 11 months ago
- Triton-based Symmetric Memory operators and examples ☆28 · Updated 3 weeks ago
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for finetuning LLMs. 🚀 The official implementation of https://arx… ☆20 · Updated 7 months ago