leloykun / flash-attention-minimal
Flash Attention in 300-500 lines of CUDA/C++
☆36 · Updated 3 months ago
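For context, the core trick such minimal implementations demonstrate is online softmax: attention scores are streamed rather than materialized as a full N×N matrix. Below is a rough, illustrative CUDA sketch of that idea, not code from this repository; the kernel name, the one-thread-per-query-row layout, and the head-dimension cap of 128 are all assumptions for illustration. Real flash attention additionally tiles K/V through shared memory.

```cuda
// Illustrative sketch of the online-softmax recurrence behind flash
// attention (NOT this repo's code). One thread owns one query row and
// streams over keys, keeping a running max m and normalizer l so the
// full N x N score matrix is never materialized.
#include <cmath>
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void online_softmax_attention(const float* Q, const float* K,
                                         const float* V, float* O,
                                         int N, int d, float scale) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;  // query index
    if (row >= N) return;

    float m = -INFINITY;      // running max of scores seen so far
    float l = 0.0f;           // running softmax denominator
    float acc[128] = {0.0f};  // output accumulator; assumes d <= 128

    for (int j = 0; j < N; ++j) {          // stream over keys
        float s = 0.0f;
        for (int k = 0; k < d; ++k)        // dot(Q[row], K[j])
            s += Q[row * d + k] * K[j * d + k];
        s *= scale;

        float m_new = fmaxf(m, s);
        float corr = __expf(m - m_new);    // rescales earlier partial sums
        float p = __expf(s - m_new);
        l = l * corr + p;
        for (int k = 0; k < d; ++k)
            acc[k] = acc[k] * corr + p * V[j * d + k];
        m = m_new;
    }
    for (int k = 0; k < d; ++k)
        O[row * d + k] = acc[k] / l;       // final normalization
}

int main() {
    const int N = 64, d = 32;
    std::vector<float> hQ(N * d, 0.01f), hK(N * d, 0.02f),
                       hV(N * d, 0.03f), hO(N * d);
    float *Q, *K, *V, *O;
    size_t bytes = N * d * sizeof(float);
    cudaMalloc(&Q, bytes); cudaMalloc(&K, bytes);
    cudaMalloc(&V, bytes); cudaMalloc(&O, bytes);
    cudaMemcpy(Q, hQ.data(), bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(K, hK.data(), bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(V, hV.data(), bytes, cudaMemcpyHostToDevice);
    online_softmax_attention<<<(N + 127) / 128, 128>>>(
        Q, K, V, O, N, d, 1.0f / sqrtf((float)d));
    cudaMemcpy(hO.data(), O, bytes, cudaMemcpyDeviceToHost);
    printf("O[0][0] = %f\n", hO[0]);  // expect 0.03 (uniform V rows)
    cudaFree(Q); cudaFree(K); cudaFree(V); cudaFree(O);
    return 0;
}
```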
Alternatives and similar repositories for flash-attention-minimal
Users interested in flash-attention-minimal are comparing it to the libraries listed below.
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆91 · Updated 4 months ago
- Fast and memory-efficient exact attention ☆74 · Updated 9 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆106 · Updated last month
- ☆154 · Updated 9 months ago
- ☆132 · Updated 6 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆86 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts. ☆253 · Updated 2 months ago
- Sirius, an efficient correction mechanism that significantly boosts Contextual Sparsity models on reasoning tasks while maintaining its… ☆21 · Updated last year
- ☆20 · Updated 7 months ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆146 · Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆212 · Updated 5 months ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. ☆73 · Updated last year
- Stick-breaking attention ☆61 · Updated 5 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆170 · Updated last year
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆54 · Updated last year
- Continuous batching and parallel acceleration for RWKV6 ☆22 · Updated last year
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆112 · Updated 8 months ago
- ☆150 · Updated 2 years ago
- ☆22 · Updated last year
- Awesome Triton Resources ☆38 · Updated 7 months ago
- ☆47 · Updated 6 months ago
- ☆22 · Updated 8 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆131 · Updated 6 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆78 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆65 · Updated this week
- ☆121 · Updated last year
- An innovative method expediting LLMs via streamlined semi-autoregressive generation and draft verification. ☆26 · Updated 7 months ago
- ☆35 · Updated last year
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆148 · Updated 9 months ago
- Transformers components but in Triton ☆34 · Updated 6 months ago