Flash Attention in 300-500 lines of CUDA/C++
☆36 · Updated Aug 22, 2025
Alternatives and similar repositories for flash-attention-minimal
Users interested in flash-attention-minimal are comparing it to the repositories listed below.
- codes and plots for "Active-Dormant Attention Heads: Mechanistically Demystifying Extreme-Token Phenomena in LLMs" ☆10 · Updated Dec 30, 2024
- triton ver of gqa flash attn, based on the tutorial ☆12 · Updated Aug 4, 2024
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Updated Mar 15, 2024
- ☆44 · Updated Nov 1, 2025
- Personal solutions to the Triton Puzzles ☆20 · Updated Jul 18, 2024
- Parallel Associative Scan for Language Models ☆18 · Updated Jan 8, 2024
- LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models ☆79 · Updated Oct 16, 2024
- ☆20 · Updated Oct 11, 2023
- ☆22 · Updated Dec 1, 2021
- ☆18 · Updated Dec 12, 2023
- supporting pytorch FSDP for optimizers ☆84 · Updated Dec 8, 2024
- u-MPS implementation and experimentation code used in the paper Tensor Networks for Probabilistic Sequence Modeling (https://arxiv.org/ab…) ☆19 · Updated Jul 2, 2020
- Flash-Linear-Attention models beyond language ☆21 · Updated Aug 28, 2025
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆1,084 · Updated Dec 30, 2024
- A probabilistic model for contextual word representation. Accepted to ACL 2023 Findings. ☆25 · Updated Oct 22, 2023
- Official PyTorch Implementation of the Longhorn Deep State Space Model ☆56 · Updated Dec 4, 2024
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆110 · Updated Oct 11, 2025
- Effective transpose on Hopper GPU ☆28 · Updated Sep 6, 2025
- ☆29 · Updated May 4, 2024
- Stick-breaking attention ☆62 · Updated Jul 1, 2025
- ☆29 · Updated Jul 9, 2024
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated Apr 17, 2024
- ☆33 · Updated Oct 4, 2024
- Experiments on the impact of depth in transformers and SSMs. ☆41 · Updated Oct 23, 2025
- Long Context Extension and Generalization in LLMs ☆63 · Updated Sep 21, 2024
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated Oct 5, 2024
- Official repo for the paper "VCR: Visual Caption Restoration". Check arxiv.org/pdf/2406.06462 for details. ☆32 · Updated Feb 26, 2025
- ☆36 · Updated Feb 26, 2024
- ☆124 · Updated May 28, 2024
- ☆31 · Updated Jul 2, 2023
- Codebase for fine-tuning Llama2 70B to generate math test questions and answers. ☆11 · Updated Aug 30, 2024
- Awesome Triton Resources ☆39 · Updated Apr 27, 2025
- Focused on fast experimentation and simplicity ☆80 · Updated Dec 24, 2024
- mHC-lite: You Don’t Need 20 Sinkhorn-Knopp Iterations ☆70 · Updated Jan 12, 2026
- FP8 flash attention implemented on the Ada architecture using the cutlass repository ☆79 · Updated Aug 12, 2024
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Lengths (ICLR 2024) ☆209 · Updated May 20, 2024
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆247 · Updated Sep 12, 2025
- Repo for the paper "CODIS: Benchmarking Context-Dependent Visual Comprehension for Multimodal Large Language Models" ☆12 · Updated Oct 14, 2024
- Concurrency library ☆17 · Updated Oct 13, 2024