High Performance FP8 GEMM Kernels for SM89 and later GPUs.
☆20 · Updated Jan 24, 2025
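For context, the core pattern behind scaled FP8 GEMM kernels — quantize inputs to FP8 with per-tensor scales, multiply and accumulate in higher precision, then rescale the result — can be sketched in NumPy. This is a simulation only: the function names are illustrative assumptions, not this repository's API, and real kernels also round to the 8-bit E4M3 grid, which is omitted here.

```python
import numpy as np

E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def quantize_sim_e4m3(x):
    """Pick a per-tensor scale so the max magnitude maps to E4M3_MAX,
    then clamp. (Rounding to the actual 8-bit value grid is omitted;
    this only models the dynamic-range handling of a scaled FP8 GEMM.)"""
    scale = np.abs(x).max() / E4M3_MAX
    q = np.clip(x / scale, -E4M3_MAX, E4M3_MAX)
    return q, scale

def scaled_gemm(a, b):
    """C = (a_q @ b_q) * (scale_a * scale_b): quantize both operands,
    accumulate in wide precision, and dequantize after the matmul,
    mirroring the structure of FP8 GEMM kernels."""
    a_q, sa = quantize_sim_e4m3(a)
    b_q, sb = quantize_sim_e4m3(b)
    return (a_q @ b_q) * (sa * sb)

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 32)).astype(np.float32)
b = rng.standard_normal((32, 16)).astype(np.float32)
c = scaled_gemm(a, b)  # close to a @ b, up to quantization error
```

On SM89 (Ada) and newer GPUs, the quantized matmul step maps onto hardware FP8 tensor-core instructions, which is where kernels like the ones listed here get their speedup.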
Alternatives and similar repositories for gemm-fp8
Users interested in gemm-fp8 are comparing it to the libraries listed below.
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Updated Aug 9, 2025
- High Performance Int8 GEMM Kernels for SM80 and later GPUs. ☆19 · Updated Mar 11, 2025
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… ☆23 · Updated Oct 1, 2025
- A repository of Binary General Matrix Multiply (BGEMM) via customized CUDA kernels. Thanks to FP6-LLM for the wheels! ☆18 · Updated Aug 30, 2024
- [ICML 2023] Official implementation of the ICML 2023 paper BiBench: Benchmarking and Analyzing Network Binar… ☆56 · Updated Mar 4, 2024
- A simple and efficient memory pool implemented with C++11. ☆10 · Updated Jun 2, 2022
- FP8 flash attention for the Ada architecture, implemented with the cutlass repository ☆81 · Updated Aug 12, 2024
- Official implementation of the IEEE TPAMI paper Diverse Sample Generation: Pushing the Limit of Data-free Qu… ☆15 · Updated Feb 26, 2023
- ☆14 · Updated Feb 5, 2025
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆171 · Updated Nov 11, 2025
- ☆33 · Updated Feb 3, 2025
- SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs (ICML 2025) ☆35 · Updated Nov 28, 2025
- A Triton JIT runtime and FFI provider in C++ ☆32 · Updated this week
- Elixir: Train a Large Language Model on a Small GPU Cluster ☆15 · Updated Jun 8, 2023
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆211 · Updated Nov 25, 2025
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆30 · Updated Dec 6, 2023
- Implementation of Sketch Your Own GAN in Jittor ☆10 · Updated Jan 2, 2022
- ☆17 · Updated Apr 9, 2025
- ☆25 · Updated Oct 31, 2024
- The official implementation of the ICML 2023 paper OFQ-ViT ☆39 · Updated Oct 3, 2023
- 📚 A curated list of awesome matrix-matrix multiplication (A * B = C) frameworks, libraries and software ☆64 · Updated Feb 23, 2025
- ☆18 · Updated Oct 19, 2021
- ☆18 · Updated Feb 28, 2023
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆62 · Updated Mar 25, 2025
- ☆11 · Updated Jan 10, 2025
- The official implementation of the DAC 2024 paper GQA-LUT ☆21 · Updated Dec 20, 2024
- [NeurIPS 2024] VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections ☆21 · Updated Oct 15, 2024
- A practical way of learning Swizzle ☆37 · Updated Feb 3, 2025
- BitSplit Post-training Quantization ☆50 · Updated Dec 20, 2021
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated Jul 21, 2023
- Code implementation of GPTAQ (https://arxiv.org/abs/2504.02692) ☆88 · Updated Jul 28, 2025
- Find, list, and inspect processes from Go (golang). ☆10 · Updated Feb 4, 2018
- Model Quantization Benchmark ☆18 · Updated Sep 30, 2025
- Official repo for the AAAI 2026 accepted paper "Rethinking the Spatio-Temporal Alignment of End-to-End 3D Perception" ☆29 · Updated Jan 13, 2026
- PyTorch Implementation of GPT-2 ☆31 · Updated Sep 4, 2024
- Mixed precision training from scratch with Tensors and CUDA ☆28 · Updated May 14, 2024
- Implement Flash Attention using Cute. ☆102 · Updated Dec 17, 2024
- ☆19 · Updated Dec 7, 2020
- Entropy-Driven GRPO with Guided Error Correction for Advantage Diversity ☆22 · Updated Aug 28, 2025