An experimental communicating attention kernel based on DeepEP.
☆35 · Updated Jul 29, 2025 (7 months ago)
Alternatives and similar repositories for AttnLink
Users interested in AttnLink are comparing it to the libraries listed below.
- ☆65 · Updated Apr 26, 2025 (10 months ago)
- DeeperGEMM: crazy optimized version · ☆74 · Updated May 5, 2025 (10 months ago)
- ☆53 · Updated Feb 24, 2026 (last week)
- Benchmark tests supporting the TiledCUDA library. · ☆18 · Updated Nov 19, 2024 (last year)
- ☆52 · Updated May 19, 2025 (9 months ago)
- ☆22 · Updated May 5, 2025 (10 months ago)
- Persistent dense GEMM for Hopper in `CuTeDSL` · ☆15 · Updated Aug 9, 2025 (6 months ago)
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer · ☆165 · Updated Feb 11, 2026 (3 weeks ago)
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. · ☆93 · Updated Jan 16, 2026 (last month)
- ☆88 · Updated May 31, 2025 (9 months ago)
- FlashTile is a CUDA Tile IR compiler compatible with NVIDIA's tileiras, targeting SM70 through SM121 NVIDIA GPUs. · ☆54 · Updated Feb 6, 2026 (last month)
- High-performance RMSNorm implementation using on-SM storage (registers and shared memory) · ☆30 · Updated Jan 22, 2026 (last month)
- Debug print operator for CUDA-graph debugging · ☆14 · Updated Aug 2, 2024 (last year)
- Tile-based language built for AI computation across all scales · ☆138 · Updated Feb 27, 2026 (last week)
- ☆32 · Updated Jul 2, 2025 (8 months ago)
- A top-down profiler for GPU applications · ☆22 · Updated Feb 29, 2024 (2 years ago)
- Multiple GEMM operators built with CUTLASS to support LLM inference. · ☆21 · Updated Aug 3, 2025 (7 months ago)
- [WIP] Better (FP8) attention for Hopper · ☆32 · Updated Feb 24, 2025 (last year)
- Fast and memory-efficient exact k-means · ☆140 · Updated Feb 18, 2026 (2 weeks ago)
- ☆38 · Updated Aug 7, 2025 (7 months ago)
- ☆20 · Updated Dec 24, 2024 (last year)
- Triton adapter for Ascend; mirror of https://gitcode.com/ascend/triton-ascend · ☆113 · Updated this week
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance. · ☆150 · Updated May 10, 2025 (9 months ago)
- A study of CUTLASS · ☆22 · Updated Nov 10, 2024 (last year)
- Framework that reduces autotuning overhead to zero for well-known deployments. · ☆97 · Updated Sep 19, 2025 (5 months ago)
- A practical way of learning swizzle · ☆37 · Updated Feb 3, 2025 (last year)
- ☆347 · Updated Jan 28, 2026 (last month)
- MSCCL++: A GPU-driven communication stack for scalable AI applications · ☆475 · Updated Feb 28, 2026 (last week)
- Accelerate LLM preference tuning via prefix sharing with a single line of code · ☆51 · Updated Jul 4, 2025 (8 months ago)
- Official repository for the paper "Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention for Test-Time Regressi…" · ☆23 · Updated Oct 1, 2025 (5 months ago)
- Perplexity GPU Kernels · ☆567 · Updated Nov 7, 2025 (4 months ago)
- A lightweight design for computation-communication overlap. · ☆223 · Updated Jan 20, 2026 (last month)
- ☆13 · Updated Jan 7, 2025 (last year)
- An NCCL extension library designed to efficiently offload GPU memory allocated by the NCCL communication library. · ☆98 · Updated Dec 17, 2025 (2 months ago)
- Quantized attention on GPU · ☆44 · Updated Nov 22, 2024 (last year)
- GEMV implementation with CUTLASS · ☆19 · Updated Aug 21, 2025 (6 months ago)
- ☆16 · Updated Feb 24, 2026 (last week)
- Triton OpenCL backend, using mlir-translate to emit OpenCL source code · ☆24 · Updated Aug 27, 2025 (6 months ago)
- Kernel library wheel for SGLang · ☆16 · Updated this week