Get down and dirty with FlashAttention 2.0 in PyTorch: plug and play, no complex CUDA kernels
☆114 · Jul 31, 2023 · Updated 2 years ago
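As a rough illustration of the "plug and play, no custom CUDA" idea, here is a minimal sketch using PyTorch's built-in `scaled_dot_product_attention` (available since PyTorch 2.0, which can dispatch to a fused FlashAttention kernel on supported GPUs). This is PyTorch's standard API, not this repository's own module.

```python
import torch
import torch.nn.functional as F

# Shapes follow the (batch, heads, seq_len, head_dim) layout SDPA expects.
batch, heads, seq_len, head_dim = 2, 8, 1024, 64
q = torch.randn(batch, heads, seq_len, head_dim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Fused attention: softmax(q @ k^T / sqrt(head_dim)) @ v, computed without
# materializing the full (seq_len x seq_len) score matrix.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 1024, 64])
```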
Alternatives and similar repositories for FlashAttention20
Users interested in FlashAttention20 are comparing it to the libraries listed below.
- Implementation of FlashAttention (FA1-FA4) in PyTorch for educational and algorithmic clarity ☆209 · Apr 12, 2026 · Updated 3 weeks ago
- Triton implementation of Flash Attention 2.0 ☆54 · Jul 31, 2023 · Updated 2 years ago
- Community implementation of the paper "Multi-Head Mixture-of-Experts" in PyTorch ☆30 · Apr 13, 2026 · Updated 3 weeks ago
- Simple implementation of TinyGPTV in super simple Zeta lego blocks ☆16 · Nov 11, 2024 · Updated last year
- Some microbenchmarks and design docs before commencement ☆11 · Feb 1, 2021 · Updated 5 years ago
- Implementation of VisionLLaMA from the paper "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta ☆16 · Nov 11, 2024 · Updated last year
- A simple PyTorch implementation of Flash MultiHead Attention ☆22 · Feb 5, 2024 · Updated 2 years ago
- Implements FlashAttention v2 with minimal code, for learning ☆16 · Jun 12, 2024 · Updated last year
- LongAttn: Selecting Long-context Training Data via Token-level Attention ☆15 · Jul 16, 2025 · Updated 9 months ago
- ☆13 · Mar 30, 2026 · Updated last month
- Triton implementation of FlashAttention2 that adds custom masks ☆175 · Aug 14, 2024 · Updated last year
- Optimize GEMM with tensor cores, step by step ☆37 · Dec 17, 2023 · Updated 2 years ago
- Fast and memory-efficient exact attention ☆23,628 · May 3, 2026 · Updated last week (see the flash_attn usage sketch after this list)
- ☆16 · Mar 13, 2023 · Updated 3 years ago
- ☆40 · Dec 14, 2025 · Updated 4 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Nov 11, 2024 · Updated last year
- Simple PyTorch profiler that combines DeepSpeed Flops Profiler and TorchInfo ☆11 · Feb 12, 2023 · Updated 3 years ago
- ☆16 · Sep 28, 2022 · Updated 3 years ago
- ☆24 · Feb 8, 2024 · Updated 2 years ago
- ☆31 · Feb 22, 2024 · Updated 2 years ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆1,128 · Dec 30, 2024 · Updated last year (see the tiled online-softmax sketch after this list)
- Batch document loader into Quivr (https://github.com/StanGirard/quivr) ☆14 · Jun 25, 2023 · Updated 2 years ago
- ☆16 · Apr 7, 2024 · Updated 2 years ago
- The official implementation of the DAC 2024 paper GQA-LUT ☆22 · Dec 20, 2024 · Updated last year
- PyTorch quantization framework for OCP MX datatypes ☆16 · May 30, 2025 · Updated 11 months ago
- ☆14 · Dec 20, 2024 · Updated last year
- NES emulator written in pure FreeBASIC with love by Blyss Sarania and Gavin Schulte (Nobbs66) ☆21 · Oct 29, 2025 · Updated 6 months ago
- Triton implementation of Flash Attention with bias ☆24 · Apr 16, 2025 · Updated last year
- Generate high-quality textual or multi-modal datasets with agents ☆18 · Jun 7, 2023 · Updated 2 years ago
- [NeurIPS 2025] Multipole Attention for Efficient Long Context Reasoning ☆23 · Dec 5, 2025 · Updated 5 months ago
- A rich user interface for generating images using Stable Diffusion ☆12 · Mar 16, 2026 · Updated last month
- AFPQ code implementation ☆23 · Nov 6, 2023 · Updated 2 years ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆508 · Jan 20, 2026 · Updated 3 months ago
- Enable everyone to develop, optimize, and deploy AI models natively on their own devices ☆10 · Sep 24, 2023 · Updated 2 years ago
- ☆325 · May 1, 2026 · Updated last week
- ☆14 · Feb 18, 2024 · Updated 2 years ago
- CUDA SGEMM optimization note ☆15 · Oct 31, 2023 · Updated 2 years ago
- ☆32 · Mar 26, 2025 · Updated last year
- Implementation of MoE Mamba from the paper "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Zeta ☆127 · Apr 13, 2026 · Updated 3 weeks ago
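For the "Fast and memory-efficient exact attention" entry above (the official flash-attn package), here is a minimal usage sketch. The function name `flash_attn_func` and the tensor layout follow the upstream documentation, but exact keyword arguments can vary between releases.

```python
import torch
from flash_attn import flash_attn_func  # pip install flash-attn

# flash_attn_func expects (batch, seqlen, nheads, headdim) tensors in
# fp16/bf16 on a CUDA device; note this layout differs from SDPA's.
batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = flash_attn_func(q, k, v, causal=True)  # same shape as q
```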
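For the "Flash Attention in ~100 lines of CUDA" entry above, here is a plain-PyTorch sketch of the tiled online-softmax forward pass that these minimal implementations build on. It is an educational reconstruction of the algorithm, not that repository's code, and is written for clarity rather than speed.

```python
import math
import torch

def flash_attention_forward(q, k, v, block_size=128):
    """q, k, v: (seq_len, head_dim). Returns softmax(q k^T / sqrt(d)) @ v
    without materializing the full seq_len x seq_len score matrix."""
    seq_len, head_dim = q.shape
    scale = 1.0 / math.sqrt(head_dim)
    out = torch.zeros_like(q)
    row_max = torch.full((seq_len, 1), float("-inf"))  # running row max
    row_sum = torch.zeros(seq_len, 1)                  # running softmax denominator
    for start in range(0, seq_len, block_size):
        kb = k[start:start + block_size]   # (B, d) tile of keys
        vb = v[start:start + block_size]   # (B, d) tile of values
        scores = (q @ kb.T) * scale        # (seq_len, B) partial scores
        new_max = torch.maximum(row_max, scores.max(dim=-1, keepdim=True).values)
        # Rescale previous accumulators when the running max increases.
        correction = torch.exp(row_max - new_max)
        p = torch.exp(scores - new_max)
        row_sum = row_sum * correction + p.sum(dim=-1, keepdim=True)
        out = out * correction + p @ vb
        row_max = new_max
    return out / row_sum

# Sanity check against naive attention.
q = torch.randn(512, 64); k = torch.randn(512, 64); v = torch.randn(512, 64)
ref = torch.softmax((q @ k.T) / math.sqrt(64), dim=-1) @ v
assert torch.allclose(flash_attention_forward(q, k, v), ref, atol=1e-4)
```

The key trick is the rescaling step: when a new tile raises the running maximum, previously accumulated sums are multiplied by `exp(old_max - new_max)` so every contribution ends up normalized against the same maximum, which is what lets the softmax be computed one tile at a time.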