IntelLabs / FP8-Emulation-Toolkit
PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware.
☆100 · Updated 11 months ago
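The core mechanism behind FP8 emulation on FP32 hardware is a quantize-dequantize round trip: each FP32 value is snapped to the nearest representable FP8 value, then carried onward in FP32. Below is a minimal, self-contained sketch of that idea for the E4M3 format; the function name and constants are illustrative, not this toolkit's API.

```python
import torch

def fake_cast_fp8_e4m3(x: torch.Tensor) -> torch.Tensor:
    """Round an FP32 tensor to the nearest FP8 E4M3 value, returned in FP32.

    E4M3 (as in the OCP FP8 spec): 4 exponent bits (bias 7), 3 mantissa bits,
    max normal 448, min normal 2**-6, subnormals down to 2**-9.
    """
    x = x.clamp(-448.0, 448.0)                    # saturate to the E4M3 range
    # Per-element binade; clamp_min avoids log2(0).
    exp = torch.floor(torch.log2(x.abs().clamp_min(1e-38)))
    exp = exp.clamp_min(-6.0)                     # subnormals share the min exponent
    # Spacing between representable values in a binade is 2**(exp - 3), so
    # scaling by 2**(3 - exp) makes torch.round (round-half-to-even, matching
    # IEEE default rounding) snap values onto the FP8 grid.
    scale = 2.0 ** (3.0 - exp)
    return torch.round(x * scale) / scale

# For comparison, PyTorch >= 2.1 ships a native dtype:
#   x.to(torch.float8_e4m3fn).to(torch.float32)
```

Production emulators typically layer per-tensor scaling and autograd integration on top of this basic round trip.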
Related projects
Alternatives and complementary repositories for FP8-Emulation-Toolkit
- PyTorch emulation library for Microscaling (MX)-compatible data formats (see the block-scaling sketch after this list) ☆164 · Updated 2 months ago
- Integer operators on GPUs for PyTorch. ☆184 · Updated last year
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆90 · Updated 4 months ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆131 · Updated last year
- A standalone GEMM kernel for FP16 activations and quantized weights, extracted from FasterTransformer. ☆85 · Updated 8 months ago
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators ☆104 · Updated 2 years ago
- LLaMA INT4 CUDA inference with AWQ. ☆48 · Updated 4 months ago
- Play GEMM with TVM. ☆84 · Updated last year
- DietCode Code Release ☆61 · Updated 2 years ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆43 · Updated last year
- QQQ is a hardware-optimized W4A8 quantization solution for LLMs. ☆89 · Updated last month
- An extension of TVMScript for writing simple, high-performance GPU kernels with Tensor Cores. ☆49 · Updated 4 months ago
- Code repository for "Evaluating Quantized Large Language Models". ☆103 · Updated 2 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆98 · Updated 2 months ago
- Official PyTorch implementation of FlatQuant: Flatness Matters for LLM Quantization ☆66 · Updated last week
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper "Outlier Suppression: Pushing the Limit of Low-bit Transformer L…" ☆46 · Updated 2 years ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) for deep learning on Tensor Cores. ☆81 · Updated 2 years ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆211 · Updated 3 weeks ago
- Fast Hadamard transform in CUDA, with a PyTorch interface (see the FWHT sketch after this list) ☆111 · Updated 6 months ago
- [ACL 2024] A QAT framework with self-distillation for enhancing ultra-low-bit LLMs. ☆85 · Updated 6 months ago
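For the Microscaling (MX) entry above: MX-style formats (per the OCP MX spec) group elements into blocks of 32 that share one power-of-two scale, with each element stored in a narrow format such as FP8 E4M3. A hedged sketch of that block-scaling round trip; the function name is mine, and the use of torch.float8_e4m3fn assumes PyTorch >= 2.1, so this is not that library's API:

```python
import torch

def mx_fake_quant(x: torch.Tensor, block: int = 32) -> torch.Tensor:
    """Quantize-dequantize with one shared power-of-two scale per block of
    `block` elements and FP8 E4M3 element storage (MXFP8-style).
    Assumes x.numel() is divisible by `block`."""
    xb = x.reshape(-1, block)
    # Shared per-block scale, chosen from the block's absmax so scaled
    # elements fit the element format (E4M3 max exponent is 8).
    amax = xb.abs().amax(dim=-1, keepdim=True).clamp_min(1e-38)
    scale = 2.0 ** (torch.floor(torch.log2(amax)) - 8.0)
    # Fake-cast each scaled element to E4M3; the clamp makes the spec's
    # saturation at the max normal (448) explicit.
    q = (xb / scale).clamp(-448.0, 448.0).to(torch.float8_e4m3fn).to(x.dtype)
    return (q * scale).reshape(x.shape)
```

For example, `mx_fake_quant(torch.randn(4, 64))` returns an FP32 tensor whose values lie on the MXFP8-representable grid.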
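For the fast Hadamard transform entry above: the CUDA kernel's job is the O(n log n) butterfly network, which pure PyTorch can express directly. A reference sketch (not that repo's API, but useful for checking a fast kernel's output), assuming a power-of-two transform length and the unnormalized Sylvester ordering:

```python
import torch

def fwht(x: torch.Tensor) -> torch.Tensor:
    """Unnormalized fast Walsh-Hadamard transform along the last dim.
    Length must be a power of two; O(n log n) butterflies in pure PyTorch."""
    n = x.shape[-1]
    assert n > 0 and n & (n - 1) == 0, "last dim must be a power of two"
    h = 1
    while h < n:
        # Pair elements h apart within blocks of 2h and butterfly them.
        x = x.reshape(*x.shape[:-1], n // (2 * h), 2, h)
        a, b = x[..., 0, :], x[..., 1, :]
        x = torch.stack([a + b, a - b], dim=-2).reshape(*x.shape[:-3], n)
        h *= 2
    return x
```

A quick self-check: since the transform is unnormalized, `fwht(fwht(x))` should equal `x.shape[-1] * x` up to floating-point error.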