microsoft / microxcaling
PyTorch emulation library for Microscaling (MX)-compatible data formats
☆331 · Updated 6 months ago
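For context, the MX formats this library emulates store a block of elements (typically 32) in a narrow element format together with one shared power-of-two scale per block. Below is a minimal PyTorch sketch of that block-scaling idea using signed 8-bit integer elements (an MXINT8-like layout); it is a conceptual illustration under those assumptions, not the microxcaling API, and `mxint8_fake_quant` is a hypothetical helper name.

```python
import torch

def mxint8_fake_quant(x: torch.Tensor, block_size: int = 32) -> torch.Tensor:
    """Quantize-dequantize x with one shared power-of-two scale per block of
    `block_size` elements and signed 8-bit integer elements. Conceptual
    MXINT8-style emulation only; not the microxcaling library's API."""
    assert x.numel() % block_size == 0, "pad so blocks divide evenly"
    blocks = x.reshape(-1, block_size)
    # Smallest power-of-two scale that fits each block's max into [-127, 127].
    max_abs = blocks.abs().amax(dim=-1, keepdim=True).clamp(min=2.0 ** -126)
    scale = 2.0 ** torch.ceil(torch.log2(max_abs / 127.0))
    q = torch.round(blocks / scale).clamp(-127, 127)  # the stored elements
    return (q * scale).reshape(x.shape)               # dequantized view

x = torch.randn(4, 64)
print((x - mxint8_fake_quant(x)).abs().max())  # small per-block error
```

Real MX element types also include FP8/FP6/FP4; swapping the integer grid above for one of those element grids is the main additional step.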
Alternatives and similar repositories for microxcaling
Users interested in microxcaling are comparing it to the libraries listed below.
- This repository contains integer operators on GPUs for PyTorch. ☆236 · Updated 2 years ago
- BitBLAS is a library for mixed-precision matrix multiplication, especially for quantized LLM deployment. ☆741 · Updated 5 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆276 · Updated 5 months ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores (a 2:4 pruning sketch appears after this list) ☆56 · Updated 2 years ago
- Fast Hadamard transform in CUDA, with a PyTorch interface (a pure-PyTorch reference sketch appears after this list) ☆271 · Updated 2 months ago
- An Easy-to-understand TensorOp Matmul Tutorial ☆403 · Updated this week
- Shared Middle-Layer for Triton Compilation ☆321 · Updated last month
- High-speed GEMV kernels achieving up to 2.7x speedup over the PyTorch baseline. ☆123 · Updated last year
- GitHub mirror of the triton-lang/triton repo. ☆119 · Updated this week
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆332 · Updated last year
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆141 · Updated 2 years ago
- Automatic Schedule Exploration and Optimization Framework for Tensor Computations ☆183 · Updated 3 years ago
- CUDA Matrix Multiplication Optimization ☆249 · Updated last year
- Development repository for the Triton-Linalg conversion ☆212 · Updated 11 months ago
- Code for the NeurIPS'24 paper QuaRot: end-to-end 4-bit inference of large language models. ☆470 · Updated last year
- LLaMA INT4 CUDA inference with AWQ ☆55 · Updated 11 months ago
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators ☆120 · Updated 3 years ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆231 · Updated 2 years ago
- Fast low-bit matmul kernels in Triton ☆418 · Updated 3 weeks ago
- Code repository of "Evaluating Quantized Large Language Models" ☆136 · Updated last year
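Two of the entries above come with sketches, as promised. First, the fast Hadamard transform, which is also the rotation at the heart of QuaRot (Hadamard rotations spread activation outliers before 4-bit quantization). The following is a pure-PyTorch reference for power-of-two sizes, a sketch rather than the repos' optimized CUDA kernels; `hadamard_transform` is a local helper name, not either project's API.

```python
import torch

def hadamard_transform(x: torch.Tensor) -> torch.Tensor:
    """Multiply the last dim of x by the (unnormalized) Hadamard matrix H_n
    in O(n log n). Pure-PyTorch reference; n must be a power of two."""
    n = x.shape[-1]
    assert n > 0 and n & (n - 1) == 0, "last dim must be a power of two"
    lead, h = x.shape[:-1], 1
    while h < n:
        y = x.reshape(*lead, n // (2 * h), 2, h)  # pairs index i with i + h
        a, b = y[..., 0, :], y[..., 1, :]
        x = torch.stack((a + b, a - b), dim=-2).reshape(*lead, n)
        h *= 2
    return x

# Sanity check against an explicitly built 8x8 (Sylvester) Hadamard matrix.
H = torch.tensor([[1.0]])
for _ in range(3):
    H = torch.cat([torch.cat([H, H], 1), torch.cat([H, -H], 1)], 0)
x = torch.randn(3, 8)
assert torch.allclose(hadamard_transform(x), x @ H, atol=1e-4)
```

Second, the vectorized N:M format: the 2:4 pattern consumed by NVIDIA sparse tensor cores keeps the two largest-magnitude values in every group of four. A minimal pruning sketch (illustrative only, not the listed repo's vectorized layout; `prune_2_of_4` is a hypothetical helper):

```python
import torch

def prune_2_of_4(w: torch.Tensor) -> torch.Tensor:
    """Zero the two smallest-magnitude values in each group of four along the
    last dim, producing a 2:4-sparse weight. Illustrative helper only."""
    assert w.shape[-1] % 4 == 0, "last dim must be divisible by 4"
    groups = w.reshape(-1, 4)
    keep = groups.abs().topk(2, dim=-1).indices  # top-2 magnitudes per group
    mask = torch.zeros_like(groups, dtype=torch.bool).scatter_(1, keep, True)
    return (groups * mask).reshape(w.shape)

w = torch.randn(8, 16)
assert (prune_2_of_4(w).reshape(-1, 4) != 0).sum(-1).max() <= 2
```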