mobiusml / gemlite
Fast low-bit matmul kernels in Triton
☆381 · Updated 3 weeks ago
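For context on what "low-bit matmul kernels in Triton" involves, below is a minimal, illustrative sketch of a Triton GEMV that dequantizes 4-bit weights on the fly. This is not gemlite's actual API: the kernel name, the packing layout (two 4-bit values per byte along K), and the parameter names (`w_packed`, `scales`, `zeros`, `BLOCK_K`) are assumptions made purely for illustration.

```python
# Illustrative sketch only; real libraries such as gemlite use tl.dot-based
# tensor-core GEMMs, autotuning, and different packing schemes.
import torch
import triton
import triton.language as tl


@triton.jit
def int4_gemv_kernel(
    x_ptr,       # fp16 activations, shape (K,)
    w_ptr,       # uint8 packed weights, shape (K // 2, N): two 4-bit values per byte
    scale_ptr,   # fp16 per-column scales, shape (N,)
    zero_ptr,    # fp16 per-column zero points, shape (N,)
    y_ptr,       # fp16 output, shape (N,)
    K, N,
    BLOCK_K: tl.constexpr,
):
    # One program instance computes one output element y[n]. K is assumed even.
    n = tl.program_id(0)
    scale = tl.load(scale_ptr + n).to(tl.float32)
    zero = tl.load(zero_ptr + n).to(tl.float32)

    acc = tl.zeros((BLOCK_K,), dtype=tl.float32)
    for k0 in range(0, K // 2, BLOCK_K):
        offs = k0 + tl.arange(0, BLOCK_K)
        mask = offs < K // 2
        packed = tl.load(w_ptr + offs * N + n, mask=mask, other=0)
        # Unpack the low and high nibbles, dequantize with (q - zero) * scale,
        # and multiply by the matching even / odd activation elements.
        lo = (packed & 0x0F).to(tl.float32)
        hi = (packed >> 4).to(tl.float32)
        x_lo = tl.load(x_ptr + offs * 2, mask=mask, other=0.0).to(tl.float32)
        x_hi = tl.load(x_ptr + offs * 2 + 1, mask=mask, other=0.0).to(tl.float32)
        acc += (lo - zero) * scale * x_lo + (hi - zero) * scale * x_hi
    tl.store(y_ptr + n, tl.sum(acc, axis=0).to(tl.float16))


def int4_gemv(x, w_packed, scales, zeros):
    # Host-side launch: one program per output column.
    K = x.shape[0]
    N = w_packed.shape[1]
    y = torch.empty(N, dtype=torch.float16, device=x.device)
    int4_gemv_kernel[(N,)](x, w_packed, scales, zeros, y, K, N, BLOCK_K=128)
    return y
```

The point of kernels in this family is that the quantized weights stay packed in global memory and are expanded only in registers, so the matmul is bound by the (much smaller) low-bit weight traffic rather than by fp16 weights.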
Alternatives and similar repositories for gemlite
Users interested in gemlite are comparing it to the libraries listed below:
- Applied AI experiments and examples for PyTorch · ☆299 · Updated 2 months ago
- Cataloging released Triton kernels. · ☆263 · Updated last month
- ☆240 · Updated this week
- Collection of kernels written in Triton language · ☆157 · Updated 6 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. · ☆264 · Updated this week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS · ☆233 · Updated 5 months ago
- A Quirky Assortment of CuTe Kernels · ☆627 · Updated last week
- Fastest kernels written from scratch · ☆374 · Updated last month
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. · ☆215 · Updated last week
- This repository contains the experimental PyTorch native float8 training UX · ☆223 · Updated last year
- extensible collectives library in triton · ☆89 · Updated 6 months ago
- a minimal cache manager for PagedAttention, on top of llama3. · ☆124 · Updated last year
- kernels, of the mega variety · ☆586 · Updated 3 weeks ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). · ☆265 · Updated 3 months ago
- ☆240 · Updated last year
- ring-attention experiments · ☆154 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface · ☆248 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM. · ☆124 · Updated 4 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ☆320 · Updated last year
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning · ☆119 · Updated 3 weeks ago
- Perplexity GPU Kernels · ☆497 · Updated last month
- ☆92 · Updated 11 months ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. · ☆698 · Updated 2 months ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. · ☆389 · Updated last week
- ☆141 · Updated 9 months ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. · ☆116 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts. · ☆246 · Updated 3 weeks ago
- A safetensors extension to efficiently store sparse quantized tensors on disk · ☆180 · Updated this week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆766 · Updated 7 months ago
- Code for the NeurIPS 2024 paper: QuaRot, an end-to-end 4-bit inference of large language models. · ☆433 · Updated 10 months ago