KuangjuX / AttnLink
An experimental communicating attention kernel based on DeepEP.
☆35 · Updated Jul 29, 2025
Alternatives and similar repositories for AttnLink
Users interested in AttnLink are comparing it to the libraries listed below.
- ☆65 · Updated Apr 26, 2025
- DeeperGEMM: crazy optimized version ☆73 · Updated May 5, 2025
- ☆54 · Updated May 5, 2025
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- ☆52 · Updated May 19, 2025
- ☆22 · Updated May 5, 2025
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Updated Aug 9, 2025
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆163 · Updated this week
- High-performance RMSNorm implementation using SM core storage (registers and shared memory) ☆26 · Updated Jan 22, 2026
- FlashTile is a CUDA Tile IR compiler compatible with NVIDIA's tileiras, targeting SM70 through SM121 GPUs. ☆37 · Updated Feb 6, 2026
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆93 · Updated Jan 16, 2026
- ☆88 · Updated May 31, 2025
- Debug print operator for cudagraph debugging ☆14 · Updated Aug 2, 2024
- A Top-Down Profiler for GPU Applications ☆22 · Updated Feb 29, 2024
- ☆32 · Updated Jul 2, 2025
- Multiple GEMM operators constructed with CUTLASS to support LLM inference. ☆20 · Updated Aug 3, 2025
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated Feb 24, 2025
- Fast and memory-efficient exact k-means ☆138 · Updated this week
- ☆38 · Updated Aug 7, 2025
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆108 · Updated this week
- ☆20 · Updated Dec 24, 2024
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆148 · Updated May 10, 2025
- Study of CUTLASS ☆22 · Updated Nov 10, 2024
- Tile-based language built for AI computation across all scales ☆120 · Updated Feb 8, 2026
- Framework to reduce autotune overhead to zero for well-known deployments. ☆96 · Updated Sep 19, 2025
- ☆343 · Updated Jan 28, 2026
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆462 · Updated Feb 8, 2026
- A practical way of learning Swizzle ☆36 · Updated Feb 3, 2025
- Official repository for the paper "Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention for Test-Time Regression" ☆23 · Updated Oct 1, 2025
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated Jul 4, 2025
- Perplexity GPU Kernels ☆560 · Updated Nov 7, 2025
- A lightweight design for computation-communication overlap. ☆219 · Updated Jan 20, 2026
- An NCCL extension library designed to efficiently offload GPU memory allocated by the NCCL communication library. ☆91 · Updated Dec 17, 2025
- ☆13 · Updated Jan 7, 2025
- Quantized Attention on GPU ☆44 · Updated Nov 22, 2024
- Quartet II Official Code ☆43 · Updated Feb 2, 2026
- GEMV implementation with CUTLASS ☆19 · Updated Aug 21, 2025
- Companion software for the Colfax Research paper "Categorical Foundations for CuTe Layouts". ☆103 · Updated Sep 24, 2025
- Triton OpenCL backend; uses mlir-translate to produce OpenCL source code ☆24 · Updated Aug 27, 2025