KuangjuX / AttnLink
An experimental communicating attention kernel based on DeepEP.
☆34 · Updated 4 months ago
Alternatives and similar repositories for AttnLink
Users who are interested in AttnLink are comparing it to the libraries listed below.
- ☆65 · Updated 7 months ago
- DeeperGEMM: a heavily optimized version of DeepGEMM · ☆73 · Updated 6 months ago
- ☆51 · Updated 6 months ago
- ☆31 · Updated 5 months ago
- ☆51 · Updated 6 months ago
- NVSHMEM-Tutorial: build a DeepEP-like GPU buffer · ☆144 · Updated 2 months ago
- Tile-based language built for AI computation across all scales · ☆82 · Updated this week
- Framework to reduce autotune overhead to zero for well-known deployments · ☆88 · Updated 2 months ago
- Training operator for the DeepSeek-V3.2-Exp DSA warmup Lightning Indexer, built on tilelang · ☆32 · Updated 2 weeks ago
- Debug-print operator for CUDA graph debugging · ☆14 · Updated last year
- TileFusion is an experimental C++ macro-kernel template library that raises the abstraction level of CUDA C for tile processing · ☆102 · Updated 5 months ago
- ☆60 · Updated last week
- ☆39 · Updated 3 months ago
- ☆13 · Updated 10 months ago
- Quantized Attention on GPU · ☆44 · Updated last year
- ☆125 · Updated 3 months ago
- A simple calculation for LLM MFU (Model FLOPs Utilization; sketched after this list) · ☆50 · Updated 2 months ago
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments · ☆70 · Updated last week
- ☆113 · Updated 6 months ago
- Efficient compute-communication overlap for distributed LLM inference · ☆63 · Updated last month
- Nex Venus Communication Library · ☆59 · Updated 2 weeks ago
- A Triton JIT runtime and FFI provider in C++ · ☆29 · Updated last month
- ☆34 · Updated last month
- A suite for parallel inference of Diffusion Transformers (DiTs) on multi-GPU clusters · ☆52 · Updated last year
- A practical way of learning Swizzle (a minimal swizzle sketch follows this list) · ☆33 · Updated 10 months ago
- ☆19 · Updated last year
- Triton adapter for Ascend; mirror of https://gitee.com/ascend/triton-ascend · ☆86 · Updated this week
- ☆65 · Updated 6 months ago
- Flash Attention implemented with CuTe · ☆97 · Updated 11 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference (the reference math is sketched after this list) · ☆45 · Updated 5 months ago
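To make the MFU entry above concrete, here is a minimal Python sketch of the usual Model FLOPs Utilization estimate. The function name, the 6N/2N FLOPs-per-token rule of thumb for dense decoders, and the example numbers are illustrative assumptions, not the listed repo's actual code.

```python
# Minimal MFU (Model FLOPs Utilization) sketch -- illustrative only.

def mfu(tokens_per_second: float,
        num_params: float,
        peak_flops: float,
        training: bool = True) -> float:
    """MFU = achieved FLOPs/s divided by the hardware's peak FLOPs/s.

    A dense decoder spends roughly 6 * num_params FLOPs per token in
    training (forward + backward) and about 2 * num_params in inference.
    """
    flops_per_token = (6.0 if training else 2.0) * num_params
    return tokens_per_second * flops_per_token / peak_flops

# Example (assumed numbers): a 7B model training at 3,000 tokens/s on a
# GPU with 312 TFLOP/s of BF16 peak (A100-class).
print(f"MFU = {mfu(3_000, 7e9, 312e12):.1%}")  # ~40.4%
```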
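For the Swizzle entry, a minimal sketch of the XOR-swizzle idea such tutorials build up to: XOR-ing the column index with the row index gives each row a different column permutation, so a column-wise walk through shared memory touches all banks instead of one. The function and the layout assumption are for illustration; real kernels swizzle in CUDA, often at 128-byte granularity, not in Python.

```python
# Minimal XOR-swizzle sketch (illustrative, not the listed repo's code).
# NVIDIA shared memory has 32 four-byte banks; threads in a warp that hit
# the same bank serialize. We assume a row stride that is a multiple of
# 32 words, so without swizzling, column 0 of every row maps to bank 0.

BANKS = 32

def swizzled_col(row: int, col: int) -> int:
    # XOR swizzle: each row applies a different permutation of columns.
    return col ^ (row % BANKS)

# With the XOR, rows 0..31 map column 0 to banks 0..31: conflict-free.
banks = {swizzled_col(r, 0) % BANKS for r in range(32)}
assert len(banks) == BANKS
```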
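And for the Decoding Attention entry, a NumPy sketch of the single-query attention that a decode-stage kernel computes: one new token attending to the whole KV cache. This is only the reference math under assumed shapes, not the repo's CUDA implementation.

```python
# Single-query ("decoding") attention reference math -- illustrative only.
import numpy as np

def decode_attention(q, K, V):
    """q: (heads, d); K, V: (heads, t, d) -- one token vs. a KV cache."""
    scale = 1.0 / np.sqrt(q.shape[-1])
    scores = np.einsum("hd,htd->ht", q, K) * scale    # (heads, t)
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=-1, keepdims=True)        # softmax over cache
    return np.einsum("ht,htd->hd", probs, V)          # (heads, d)

# MQA/GQA share K/V heads across query heads; MLA compresses the cache.
out = decode_attention(np.random.randn(8, 64),
                       np.random.randn(8, 128, 64),
                       np.random.randn(8, 128, 64))
assert out.shape == (8, 64)
```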