microsoft / microxcaling
PyTorch emulation library for Microscaling (MX)-compatible data formats
☆197 · Updated 4 months ago
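For context, an MX format stores each block of values (typically 32) against one shared power-of-two scale, with the elements themselves held in a narrow type such as FP8, FP6, FP4, or INT8. The sketch below fake-quantizes a tensor through an MXINT8-like layout in plain PyTorch; it illustrates the idea only and is not the microxcaling API, so the block size, element width, and function name are assumptions.

```python
# A minimal sketch of MX-style block quantization, NOT the microxcaling API.
# Assumption: block size 32 with 8-bit integer elements (MXINT8-like).
import torch

def mx_fake_quantize(x: torch.Tensor, block_size: int = 32,
                     elem_bits: int = 8) -> torch.Tensor:
    """Round-trip x through an emulated MX-like format along the last dim."""
    orig_shape = x.shape
    blocks = x.reshape(-1, block_size)        # numel must divide by block_size
    max_int = 2 ** (elem_bits - 1) - 1        # 127 for 8-bit elements
    # One shared power-of-two scale per block, chosen so the block's
    # absolute maximum fits the element range without clipping.
    amax = blocks.abs().amax(dim=-1, keepdim=True).clamp_min(2.0 ** -126)
    scale = torch.exp2(torch.ceil(torch.log2(amax / max_int)))
    # Quantize every element against the shared scale, then dequantize.
    q = torch.clamp(torch.round(blocks / scale), -max_int, max_int)
    return (q * scale).reshape(orig_shape)

x = torch.randn(4, 64)
print((x - mx_fake_quantize(x)).abs().max())  # small round-trip error
```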
Alternatives and similar repositories for microxcaling:
Users interested in microxcaling are comparing it to the libraries listed below.
- ☆134 · Updated last year
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆105 · Updated 2 months ago
- This repository contains integer operators on GPUs for PyTorch. ☆191 · Updated last year
- ☆89 · Updated last year
- ☆180 · Updated 7 months ago
- ☆87 · Updated 9 months ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores (see the 2:4 sparsity sketch after this list). ☆48 · Updated last year
- Experimental projects related to TensorRT. ☆89 · Updated this week
- SparseTIR: Sparse Tensor Compiler for Deep Learning. ☆134 · Updated last year
- Shared Middle-Layer for Triton Compilation. ☆224 · Updated this week
- Automatic Schedule Exploration and Optimization Framework for Tensor Computations. ☆176 · Updated 2 years ago
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators. ☆107 · Updated 2 years ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆234 · Updated 3 months ago
- CUDA Matrix Multiplication Optimization. ☆159 · Updated 6 months ago
- PyTorch-Based Fast and Efficient Processing for Various Machine Learning Applications with Diverse Sparsity. ☆102 · Updated last week
- Magicube is a high-performance library of quantized sparse matrix operations (SpMM and SDDMM) for deep learning on Tensor Cores. ☆85 · Updated 2 years ago
- An Easy-to-Understand TensorOp Matmul Tutorial. ☆316 · Updated 4 months ago
- ☆134 · Updated 6 months ago
- Timeloop performs modeling, mapping, and code generation for tensor algebra workloads on various accelerator architectures. ☆364 · Updated this week
- ☆220 · Updated 2 years ago
- Fast Hadamard transform in CUDA, with a PyTorch interface (see the reference FWHT sketch after this list). ☆141 · Updated 8 months ago
- Official implementation of the EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?". ☆19 · Updated last year
- A Winograd Minimal Filter Implementation in CUDA. ☆24 · Updated 3 years ago
- Assembler for NVIDIA Volta and Turing GPUs. ☆211 · Updated 3 years ago
- High-speed GEMV kernels, up to 2.7x faster than the PyTorch baseline. ☆97 · Updated 7 months ago
- ☆50 · Updated 10 months ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆519 · Updated this week
- Several optimization methods for half-precision general matrix multiplication (HGEMM) using Tensor Cores with the WMMA API and MMA PTX instructions. ☆342 · Updated 5 months ago
- [MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration. ☆197 · Updated 2 years ago
- Dissecting NVIDIA GPU Architecture. ☆88 · Updated 2 years ago
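As noted next to the N:M sparse-format entry above, the pattern those kernels accelerate is easy to show in plain PyTorch. The sketch below applies 2:4 structured pruning, keeping the two largest-magnitude weights in every contiguous group of four; the function name and the grouping over the flattened tensor are illustrative assumptions, not any library's API.

```python
# A minimal sketch of 2:4 (N:M) structured pruning, for illustration only.
import torch

def prune_2_to_4(w: torch.Tensor) -> torch.Tensor:
    """Zero all but the 2 largest-magnitude entries in each group of 4."""
    groups = w.reshape(-1, 4)                 # numel must divide by 4
    keep = groups.abs().topk(k=2, dim=-1).indices
    mask = torch.zeros_like(groups, dtype=torch.bool)
    mask.scatter_(-1, keep, True)             # mark the surviving positions
    return (groups * mask).reshape(w.shape)

w = torch.randn(8, 16)
print((prune_2_to_4(w) != 0).float().mean())  # ~0.5: half the weights remain
```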
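Likewise, for the fast Hadamard transform entry, the O(n log n) butterfly that such CUDA kernels implement can be written as a short reference in PyTorch. This is an unnormalized fast Walsh-Hadamard transform useful for checking results, not the repo's interface.

```python
# A reference (unnormalized) fast Walsh-Hadamard transform in plain PyTorch.
import torch

def hadamard_transform(x: torch.Tensor) -> torch.Tensor:
    """Apply H_n along the last dimension; n must be a power of two."""
    n = x.shape[-1]
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        # Pair up elements h apart and replace them with (a + b, a - b).
        x = x.reshape(*x.shape[:-1], n // (2 * h), 2, h)
        a, b = x[..., 0, :], x[..., 1, :]
        x = torch.cat((a + b, a - b), dim=-1).flatten(-2)
        h *= 2
    return x

x = torch.randn(3, 8)
# H_n @ H_n == n * I, so applying twice and dividing by n recovers x.
y = hadamard_transform(hadamard_transform(x)) / x.shape[-1]
print(torch.allclose(x, y, atol=1e-5))
```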