KuangjuX / AttnLink
An experimental communicating attention kernel based on DeepEP.
☆34 · Updated last month
Alternatives and similar repositories for AttnLink
Users interested in AttnLink are comparing it to the libraries listed below.
- ☆63 · Updated 4 months ago
- DeeperGEMM: crazy optimized version ☆70 · Updated 4 months ago
- ☆50 · Updated 3 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆60 · Updated this week
- ☆38 · Updated last month
- Debug print operator for CUDA graph debugging ☆13 · Updated last year
- Tile-based language built for AI computation across all scales ☆51 · Updated this week
- ☆23 · Updated last week
- ☆30 · Updated 2 months ago
- ☆42 · Updated 4 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆81 · Updated last week
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆47 · Updated last week
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆70 · Updated this week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆97 · Updated 2 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆109 · Updated 4 months ago
- A practical way of learning Swizzle (see the swizzle sketch after this list) ☆25 · Updated 7 months ago
- Quantized Attention on GPU ☆44 · Updated 9 months ago
- ☆102 · Updated 3 weeks ago
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆268 · Updated last week
- A lightweight design for computation-communication overlap. ☆165 · Updated this week
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆48 · Updated last year
- A simple calculation for LLM MFU (see the MFU sketch after this list). ☆44 · Updated this week
- ☆57 · Updated 3 months ago
- A simple API to use CUPTI ☆11 · Updated 3 weeks ago
- ☆95 · Updated 3 months ago
- An easily extensible framework for understanding and optimizing CUDA operators, intended for learning use only ☆16 · Updated last year
- ☆19 · Updated 11 months ago
- Decoding Attention is specifically optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference. ☆42 · Updated 3 months ago
- Implement Flash Attention using CuTe. ☆95 · Updated 8 months ago
- ☆47 · Updated 2 weeks ago
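
For the Swizzle entry above, here is a minimal sketch of the XOR-swizzling idea such tutorials teach: permuting shared-memory column indices with the low bits of the row index so a warp's column accesses land in distinct banks. The function name, the 3-bit mask, and the 8-row tile are illustrative assumptions, not taken from that repository.

```python
def swizzle(row: int, col: int, bits: int = 3) -> int:
    """Map (row, col) to a swizzled column by XORing in the low bits of row."""
    return col ^ (row & ((1 << bits) - 1))

# Without swizzling, column 0 of every row maps to the same shared-memory
# bank; with swizzling, 8 consecutive rows spread across 8 distinct columns.
for row in range(8):
    print(f"row {row}: col 0 -> swizzled col {swizzle(row, 0)}")
```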
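For the LLM MFU entry above, a minimal sketch of the usual back-of-the-envelope calculation: MFU is achieved model FLOPs divided by the hardware's peak FLOPs. The ~6N FLOPs-per-token training approximation and the A100 BF16 peak used below are common assumptions, not figures from that repository.

```python
def mfu(n_params: float, tokens_per_sec: float, peak_flops: float) -> float:
    """Model FLOPs Utilization: achieved model FLOPs / hardware peak FLOPs."""
    achieved = 6.0 * n_params * tokens_per_sec  # ~6N FLOPs/token (fwd + bwd)
    return achieved / peak_flops

# Example: 7B-parameter model training at 3,000 tokens/s per GPU on an A100
# (peak BF16 throughput ~312 TFLOPS).
print(f"MFU = {mfu(7e9, 3_000, 312e12):.1%}")  # -> MFU = 40.4%
```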