zhuzilin / flash-attention-with-sink
☆38 · Updated last month
Alternatives and similar repositories for flash-attention-with-sink
Users interested in flash-attention-with-sink are comparing it to the libraries listed below.
- ☆64 · Updated 4 months ago
- Debug print operator for cudagraph debugging ☆13 · Updated last year
- ☆50 · Updated 3 months ago
- DeeperGEMM: crazy optimized version ☆70 · Updated 4 months ago
- An experimental communicating attention kernel based on DeepEP. ☆34 · Updated last month
- Quantized Attention on GPU ☆44 · Updated 9 months ago
- DLSlime RDMA Transfer Engine ☆48 · Updated this week
- ☆23 · Updated last week
- Estimate MFU for DeepSeekV3 ☆24 · Updated 8 months ago
- ☆102 · Updated 3 weeks ago
- ☆50 · Updated 2 weeks ago
- An auxiliary project analyzing the characteristics of KV in DiT Attention. ☆32 · Updated 9 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆100 · Updated this week
- Framework to reduce autotune overhead to zero for well-known deployments. ☆81 · Updated 2 weeks ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆43 · Updated 2 months ago
- ☆78 · Updated 4 months ago
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆24 · Updated 11 months ago
- Tile-based language built for AI computation across all scales ☆51 · Updated last week
- 16-fold memory access reduction with nearly no loss ☆105 · Updated 5 months ago
- A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache ☆59 · Updated 2 weeks ago
- A practical way of learning Swizzle ☆25 · Updated 7 months ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆57 · Updated 5 months ago
- PyTorch bindings for CUTLASS grouped GEMM ☆116 · Updated 3 months ago
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆47 · Updated last month
- Utility scripts for PyTorch (e.g. a memory profiler that understands lower-level allocations such as those made by NCCL) ☆52 · Updated this week
- ☆82 · Updated 7 months ago
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆140 · Updated 3 months ago
- ☆30 · Updated 2 months ago
- A simple API to use CUPTI ☆11 · Updated 3 weeks ago
- ☆95 · Updated 3 months ago