DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling
☆22 · Apr 28, 2026 · Updated this week
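To make the "fine-grained scaling" in the title concrete, below is a NumPy sketch (not DeepGEMM's actual CUDA implementation) of the scheme popularized by DeepSeek-V3: activations quantized with one scale per 1×128 tile, weights with one scale per 128×128 tile, and the scales applied while accumulating along K. The tile sizes and the e4m3 maximum of 448 follow public descriptions of that scheme; the mantissa rounding is a crude stand-in for a real FP8 cast (no saturation or subnormals, since the per-block scale keeps values in range).

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite e4m3 magnitude

def _round_mantissa(v, bits=3):
    # Round to `bits` explicit mantissa bits (e4m3 stores 3); exponent range
    # is ignored -- the per-block scale keeps values representable.
    m, e = np.frexp(v)
    step = 2.0 ** (bits + 1)
    return np.ldexp(np.round(m * step) / step, e)

def quantize_blockwise(x, block):
    """One scale per (block[0] x block[1]) tile; values are rounded to an
    e4m3-like grid. Returns (quantized values, per-tile scales)."""
    bm, bn = block
    m, n = x.shape
    q = np.empty_like(x)
    scales = np.empty((m // bm, n // bn))
    for i in range(m // bm):
        for j in range(n // bn):
            tile = x[i*bm:(i+1)*bm, j*bn:(j+1)*bn]
            s = np.abs(tile).max() / FP8_E4M3_MAX + 1e-30
            q[i*bm:(i+1)*bm, j*bn:(j+1)*bn] = _round_mantissa(tile / s)
            scales[i, j] = s
    return q, scales

def gemm_fine_grained(qa, sa, qb, sb, block=128):
    """Accumulate C = A @ B one K-block at a time, applying the per-block
    scales during accumulation (mimicking promotion to FP32 accumulators)."""
    m, k = qa.shape
    n = qb.shape[1]
    c = np.zeros((m, n))
    for kb in range(k // block):
        a_blk = qa[:, kb*block:(kb+1)*block] * sa[:, kb:kb+1]  # (m, block)
        col_scale = np.repeat(sb[kb], block)                   # (n,)
        b_blk = qb[kb*block:(kb+1)*block, :] * col_scale       # (block, n)
        c += a_blk @ b_blk
    return c
```

The point of the fine granularity is that each 128-wide block gets its own scale, so one outlier only degrades the precision of its own tile rather than an entire row or tensor.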
Alternatives and similar repositories for DeepGEMM
Users interested in DeepGEMM are comparing it to the libraries listed below.
- Multiple GEMM operators built with CUTLASS to support LLM inference. ☆20 · Aug 3, 2025 · Updated 8 months ago
- ☆52 · May 19, 2025 · Updated 11 months ago
- ☆23 · Aug 14, 2024 · Updated last year
- ☆66 · Apr 26, 2025 · Updated last year
- Handwritten GEMM using Intel AMX (Advanced Matrix Extensions). ☆17 · Jan 11, 2025 · Updated last year
- SGLang Kernel Wheel Index. ☆22 · Apr 21, 2026 · Updated last week
- Triton-based Symmetric Memory operators and examples. ☆98 · Mar 28, 2026 · Updated last month
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆19 · Nov 18, 2024 · Updated last year
- A simple API for using CUPTI. ☆10 · Aug 19, 2025 · Updated 8 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code. ☆52 · Jul 4, 2025 · Updated 9 months ago
- TileFusion is an experimental C++ macro-kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆108 · Jun 28, 2025 · Updated 10 months ago
- TiledLower is a dataflow analysis and codegen framework written in Rust. ☆13 · Nov 23, 2024 · Updated last year
- High-performance RMSNorm implementation using SM core storage (registers and shared memory). ☆30 · Jan 22, 2026 · Updated 3 months ago
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- [WIP] Better (FP8) attention for Hopper. ☆33 · Feb 24, 2025 · Updated last year
- ☕️ A VS Code extension for Netron; supports *.pdmodel, *.nb, *.onnx, *.pb, *.h5, *.tflite, *.pth, *.pt, *.mnn, *.param, etc. ☆14 · Jun 4, 2023 · Updated 2 years ago
- ☆65 · Feb 15, 2026 · Updated 2 months ago
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs. ☆26 · Apr 8, 2026 · Updated 3 weeks ago
- Triton-to-TVM transpiler. ☆23 · Oct 14, 2024 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆17 · Jun 3, 2024 · Updated last year
- Estimate MFU for DeepSeek-V3. ☆26 · Jan 5, 2025 · Updated last year
- ☆38 · Aug 7, 2025 · Updated 8 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU buffer. ☆180 · Feb 11, 2026 · Updated 2 months ago
- Learning NVIDIA cuTile. ☆167 · Dec 9, 2025 · Updated 4 months ago
- Sample code using NVSHMEM on multi-GPU systems. ☆30 · Jan 22, 2023 · Updated 3 years ago
- Writing a CUDA software ray-tracing renderer with analysis-driven optimization from scratch: a Python-importable, distributed parallel re… ☆37 · Apr 12, 2026 · Updated 2 weeks ago
- Implementation from scratch in C of the multi-head latent attention used in the DeepSeek-V3 technical paper. ☆18 · Jan 15, 2025 · Updated last year
- ☆20 · Sep 28, 2024 · Updated last year
- Custom PTX instruction benchmark. ☆139 · Feb 27, 2025 · Updated last year
- Quantized attention on GPU. ☆44 · Nov 22, 2024 · Updated last year
- Performance of the C++ interfaces of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆46 · Feb 27, 2025 · Updated last year
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆193 · Jan 28, 2025 · Updated last year
- The official implementation of the intra-stage fusion technique introduced in https://arxiv.org/abs/2409.13221. ☆31 · Apr 22, 2025 · Updated last year
- A size profiler for CUDA binaries. ☆70 · Jan 15, 2026 · Updated 3 months ago
- ☆13 · Jun 18, 2024 · Updated last year
- ☆26 · Feb 17, 2025 · Updated last year
- ☆57 · Feb 24, 2026 · Updated 2 months ago
- Dynamic resource changes for multi-dimensional parallelism training. ☆31 · Aug 22, 2025 · Updated 8 months ago
- [Archived] For the latest updates and community contributions, please visit https://github.com/Ascend/TransferQueue or https://gitcode.co… ☆15 · Jan 16, 2026 · Updated 3 months ago