microsoft / microxcaling
PyTorch emulation library for Microscaling (MX)-compatible data formats
☆327 · Updated 6 months ago
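For context on what the library emulates: in Microscaling (MX) formats, a block of elements (typically 32) shares a single power-of-two scale, while each element is stored in a narrow type such as FP8, FP6, FP4, or INT8. The sketch below illustrates that idea in plain PyTorch. It is not microxcaling's actual API; the function name `mx_fake_quantize`, the E4M3 element type, and the scale-selection rule are assumptions for illustration only.

```python
# Minimal sketch (not microxcaling's API): fake-quantize a tensor with
# MX-style blocks, where each block of 32 values shares one power-of-two
# scale and elements are cast to FP8 E4M3. Requires PyTorch >= 2.1 for
# torch.float8_e4m3fn. Names and parameter choices are illustrative.
import torch
import torch.nn.functional as F

def mx_fake_quantize(x: torch.Tensor, block_size: int = 32,
                     elem_max: float = 448.0) -> torch.Tensor:
    """Per-block shared-scale fake quantization of a 1-D tensor.

    elem_max = 448.0 is the largest normal FP8 E4M3 value. The shared
    scale is rounded to a power of two, mimicking the E8M0 scale in the
    OCP Microscaling spec (the real spec picks the exponent slightly
    differently; ceil here simply guarantees no element overflows).
    """
    n = x.numel()
    pad = (-n) % block_size
    blocks = F.pad(x, (0, pad)).view(-1, block_size)

    # One power-of-two scale per block, chosen so the block's absolute
    # max lands inside the element format's representable range.
    amax = blocks.abs().amax(dim=1, keepdim=True).clamp(min=1e-30)
    scale = torch.exp2(torch.ceil(torch.log2(amax / elem_max)))

    # Scale down, round by casting to the narrow element type, rescale.
    q = (blocks / scale).to(torch.float8_e4m3fn).to(x.dtype) * scale
    return q.reshape(-1)[:n]

x = torch.randn(1000)
print((x - mx_fake_quantize(x)).abs().max())  # small block-wise error
```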
Alternatives and similar repositories for microxcaling
Users interested in microxcaling are comparing it to the libraries listed below.
- ☆168 · Updated 2 years ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆732 · Updated 4 months ago
- This repository contains integer operators on GPUs for PyTorch. ☆223 · Updated 2 years ago
- ☆164 · Updated last year
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores. ☆56 · Updated 2 years ago
- ☆82 · Updated last year
- ☆254 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface. ☆267 · Updated 2 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆276 · Updated 5 months ago
- ☆110 · Updated last year
- Shared Middle-Layer for Triton Compilation. ☆321 · Updated 3 weeks ago
- Code for the NeurIPS'24 paper QuaRot: end-to-end 4-bit inference of large language models. ☆469 · Updated last year
- ☆113 · Updated 2 years ago
- ☆243 · Updated 3 years ago
- An Easy-to-understand TensorOp Matmul Tutorial. ☆397 · Updated 2 months ago
- ☆214 · Updated 2 months ago
- Several optimization methods of half-precision general matrix multiplication (HGEMM) using tensor cores with the WMMA API and MMA PTX instruct… ☆509 · Updated last year
- ☆165 · Updated 7 months ago
- CUDA Matrix Multiplication Optimization. ☆247 · Updated last year
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆123 · Updated last year
- Development repository for the Triton-Linalg conversion. ☆209 · Updated 10 months ago
- Automatic Schedule Exploration and Optimization Framework for Tensor Computations. ☆180 · Updated 3 years ago
- Assembler for NVIDIA Volta and Turing GPUs. ☆235 · Updated 3 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning. ☆141 · Updated 2 years ago
- A collection of benchmarks to measure basic GPU capabilities. ☆476 · Updated 2 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving. ☆331 · Updated last year
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity. ☆230 · Updated 2 years ago
- A Winograd Minimal Filter Implementation in CUDA. ☆28 · Updated 4 years ago
- Fast low-bit matmul kernels in Triton. ☆410 · Updated last week
- ☆152 · Updated 11 months ago