gau-nernst / quantized-training
Explore training for quantized models
☆18 · Updated 4 months ago
Alternatives and similar repositories for quantized-training
Users who are interested in quantized-training are comparing it to the libraries listed below.
- High-speed GEMV kernels, achieving up to a 2.7x speedup over the PyTorch baseline. ☆109 · Updated 10 months ago
- Extensible collectives library in Triton ☆86 · Updated last month
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆84 · Updated last week
- Boosting 4-bit inference kernels with 2:4 sparsity ☆73 · Updated 8 months ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆40 · Updated last month
- ☆79 · Updated 6 months ago
- A bunch of kernels that might make stuff slower 😉 ☆40 · Updated this week
- ☆32 · Updated this week
- ☆70 · Updated 3 months ago
- Ahead of Time (AOT) Triton Math Library ☆63 · Updated this week
- Fast low-bit matmul kernels in Triton ☆299 · Updated this week
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆187 · Updated 11 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆89 · Updated 2 weeks ago
- Llama INT4 CUDA inference with AWQ ☆54 · Updated 3 months ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆132 · Updated this week
- Framework to reduce autotune overhead to zero for well-known deployments. ☆70 · Updated last week
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆91 · Updated 6 years ago
- Effective transpose on Hopper GPUs ☆18 · Updated 2 weeks ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆110 · Updated 5 months ago
- High-Performance SGEMM on CUDA devices ☆91 · Updated 3 months ago
- DeeperGEMM: crazy optimized version ☆69 · Updated last week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆124 · Updated this week
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆108 · Updated 8 months ago
- ☆27 · Updated 4 months ago
- A minimal cache manager for PagedAttention, on top of llama3. ☆87 · Updated 8 months ago
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆106 · Updated 7 months ago
- ☆104 · Updated 8 months ago
- ☆50 · Updated last year
- ☆32 · Updated last week
- ☆58 · Updated 3 weeks ago