☆20 · Updated Dec 24, 2024
Alternatives and similar repositories for flash_tree_attn
Users interested in flash_tree_attn are comparing it to the libraries listed below.
- [ACL 2025 main] FR-Spec: Frequency-Ranked Speculative Sampling (☆51, updated Jul 15, 2025)
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification (☆74, updated Jul 14, 2025)
- Decoding Attention, specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference (☆46, updated Jun 11, 2025)
- (☆13, updated Jan 7, 2025)
- Triton version of GQA flash attention, based on the tutorial (☆12, updated Aug 4, 2024)
- An experimental communicating attention kernel based on DeepEP (☆35, updated Jul 29, 2025)
- Website for CSE 234, Winter 2025 (☆13, updated Mar 24, 2025)
- A selective knowledge distillation algorithm for efficient speculative decoders (☆36, updated Nov 27, 2025)
- High-performance RMSNorm implementation using SM core storage (registers and shared memory) (☆27, updated Jan 22, 2026)
- Optimizing GEMM with Tensor Cores, step by step (☆36, updated Dec 17, 2023)
- (☆32, updated May 26, 2024)
- Awesome Triton Resources (☆39, updated Apr 27, 2025)
- Benchmark tests supporting the TiledCUDA library (☆18, updated Nov 19, 2024)
- An Attention Superoptimizer (☆22, updated Jan 20, 2025)
- DeepStream + CUDA: yolo26, yolo-master, yolo11, yolov8, SAM, transformer, etc. (☆36, updated Feb 7, 2026)
- (☆65, updated Apr 26, 2025)
- Official implementation of SAM-Decoding: Speculative Decoding via Suffix Automaton (☆40, updated Feb 13, 2025)
- PyTorch implementation of the Flash Spectral Transform Unit (☆21, updated Sep 19, 2024)
- Code for Draft Attention (☆99, updated May 22, 2025)
- (☆22, updated May 5, 2025)
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" (☆22, updated Apr 22, 2025)
- [ICLR 2025] Official code release for "Explaining Modern Gated-Linear RNNs via a Unified Implicit Attention Formulation" (☆49, updated Mar 1, 2025)
- Multiple GEMM operators built with CUTLASS to support LLM inference (☆20, updated Aug 3, 2025)
- Official project page for HLA: Higher-order Linear Attention (https://arxiv.org/abs/2510.27258) (☆45, updated Jan 6, 2026)
- Official repository of "Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions", ICLR 2024 Sp… (☆21, updated Mar 7, 2024)
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference (☆57, updated Nov 20, 2024)
- (☆45, updated Nov 10, 2023)
- (☆44, updated this week)
- PyTorch bindings for CUTLASS grouped GEMM (☆143, updated May 29, 2025)
- Sirius, an efficient correction mechanism that significantly boosts Contextual Sparsity models on reasoning tasks while maintaining its… (☆21, updated Sep 10, 2024)
- Code for our paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings) (☆46, updated Dec 9, 2023)
- Code for the paper https://arxiv.org/pdf/2309.06979.pdf (☆21, updated Jul 29, 2024)
- (☆27, updated Mar 24, 2025)
- A study of CUTLASS (☆22, updated Nov 10, 2024)
- A minimal C implementation of speculative decoding based on llama2.c (☆25, updated Jul 15, 2024)
- Xmixers: a collection of SOTA efficient token/channel mixers (☆28, updated Sep 4, 2025)
- Triton documentation in Simplified Chinese / Triton 中文文档 (☆105, updated Dec 17, 2025)
- Flash Attention implemented with CuTe (☆101, updated Dec 17, 2024)
- Awesome code, projects, books, etc. related to CUDA (☆31, updated Feb 3, 2026)