flash_tree_attn (☆19, updated Dec 24, 2024)
Alternatives and similar repositories for flash_tree_attn
Users interested in flash_tree_attn are comparing it to the libraries listed below.
- Triton version of GQA flash attention, based on the tutorial (☆12, updated Aug 4, 2024)
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification (☆76, updated Jul 14, 2025)
- [ACL 2025 main] FR-Spec: Frequency-Ranked Speculative Sampling (☆54, updated Jul 15, 2025)
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" (☆22, updated Apr 22, 2025)
- An experimental communicating attention kernel based on DeepEP (☆35, updated Jul 29, 2025)
- Decoding Attention, specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference (☆45, updated Jun 11, 2025)
- An implementation of the Prism layer (https://arxiv.org/abs/2011.04823) (☆12, updated Nov 13, 2020)
- Official implementation of SAM-Decoding: Speculative Decoding via Suffix Automaton (☆44, updated Feb 13, 2025)
- A selective knowledge distillation algorithm for efficient speculative decoders (☆36, updated Nov 27, 2025)
- Website for CSE 234, Winter 2025 (☆13, updated Mar 24, 2025)
- PyTorch bindings for CUTLASS grouped GEMM (☆146, updated May 29, 2025)
- Benchmark tests supporting the TiledCUDA library (☆18, updated Nov 19, 2024)
- An Attention Superoptimizer (☆22, updated Jan 20, 2025)
- High-performance RMSNorm implementation using SM core storage (registers and shared memory) (☆29, updated Jan 22, 2026)
- (no description) (☆32, updated May 26, 2024)
- Code for Draft Attention (☆100, updated May 22, 2025)
- [WIP] Better (FP8) attention for Hopper (☆32, updated Feb 24, 2025)
- (no description) (☆13, updated Jan 7, 2025)
- DuoDecoding: Hardware-aware Heterogeneous Speculative Decoding with Dynamic Multi-Sequence Drafting (☆17, updated Mar 4, 2025)
- Code for the paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings) (☆46, updated Dec 9, 2023)
- Awesome Triton Resources (☆39, updated Apr 27, 2025)
- paNote: a graph-based note app that can be deployed as a blog or used as an Electron app (☆12, updated Jun 15, 2024)
- Official code for GliDe with a CaPE (☆20, updated Aug 13, 2024)
- (no description) (☆22, updated May 5, 2025)
- [AACL 2023] Official implementation of the paper "Towards LLM-based Fact Verification on News Claims with a Hierarchical Step-by-Step Prompti…" (☆21, updated Apr 1, 2024)
- Official repository of "Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions", ICLR 2024 Sp… (☆21, updated Mar 7, 2024)
- Minimal C implementation of speculative decoding based on llama2.c (☆27, updated Jul 15, 2024)
- [ASPLOS'26] Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter (☆156, updated Feb 27, 2026)
- (no description) (☆29, updated Mar 24, 2025)
- Multiple GEMM operators built with CUTLASS to support LLM inference (☆19, updated Aug 3, 2025)
- The official code repo and data hub for the top_nsigma sampling strategy for LLMs (☆26, updated Feb 11, 2025)
- Utility scripts for PyTorch (e.g. make Perfetto show some disappearing kernels, a memory profiler that understands more low-level allocatio… (☆93, updated Sep 11, 2025)
- PyTorch implementation of the Flash Spectral Transform Unit (☆22, updated Sep 19, 2024)
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding (☆144, updated Dec 4, 2024)
- Official project page for HLA: Higher-order Linear Attention (https://arxiv.org/abs/2510.27258) (☆45, updated Jan 6, 2026)
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak⚡️ performance (☆149, updated May 10, 2025)
- Triton documentation in Simplified Chinese / Triton 中文文档 (☆105, updated Mar 5, 2026)
- Xmixers: A collection of SOTA efficient token/channel mixers (☆28, updated Sep 4, 2025)
- (no description) (☆46, updated Nov 10, 2023)