Get down and dirty with FlashAttention 2.0 in PyTorch: plug and play, no complex CUDA kernels.
☆113 · Jul 31, 2023 · Updated 2 years ago
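The "plug and play, no complex CUDA kernels" pitch can be illustrated with PyTorch's built-in fused attention entry point. This is a minimal sketch, assuming PyTorch ≥ 2.0 (the function and tensor shapes are PyTorch's standard API, not this repo's code); `scaled_dot_product_attention` dispatches to a fused FlashAttention-style kernel when the hardware and dtypes allow it, and falls back to a plain math implementation otherwise:

```python
# Minimal sketch: flash-style attention via PyTorch's fused entry point.
# torch.nn.functional.scaled_dot_product_attention picks a FlashAttention
# kernel automatically when available (e.g. CUDA, fp16/bf16) and otherwise
# falls back to the reference math path -- no custom CUDA required.
import torch
import torch.nn.functional as F

batch, heads, seq_len, head_dim = 2, 4, 128, 64  # illustrative sizes
q = torch.randn(batch, heads, seq_len, head_dim)
k = torch.randn(batch, heads, seq_len, head_dim)
v = torch.randn(batch, heads, seq_len, head_dim)

# Causal self-attention; no manual (seq_len x seq_len) mask is built.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # same shape as q: (batch, heads, seq_len, head_dim)
```

On CPU this still runs (via the fallback path), which is what makes the approach "plug and play": the call site is identical regardless of which kernel PyTorch selects.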
Alternatives and similar repositories for FlashAttention20
Users interested in FlashAttention20 are comparing it to the libraries listed below.
- Implementation of FlashAttention in PyTorch ☆182 · Jan 12, 2025 · Updated last year
- Triton implementation of Flash Attention 2.0 ☆51 · Jul 31, 2023 · Updated 2 years ago
- Community implementation of the paper "Multi-Head Mixture-of-Experts" in PyTorch ☆29 · Mar 22, 2026 · Updated last week
- Simple implementation of TinyGPTV in super simple Zeta lego blocks ☆16 · Nov 11, 2024 · Updated last year
- Some microbenchmarks and design docs before commencement ☆11 · Feb 1, 2021 · Updated 5 years ago
- A simple PyTorch implementation of Flash Multi-Head Attention ☆22 · Feb 5, 2024 · Updated 2 years ago
- Implement FlashAttention v2 with minimal code to learn ☆15 · Jun 12, 2024 · Updated last year
- LongAttn: Selecting Long-context Training Data via Token-level Attention ☆15 · Jul 16, 2025 · Updated 8 months ago
- ☆13 · Apr 25, 2025 · Updated 11 months ago
- Triton implementation of FlashAttention2 that adds custom masks ☆170 · Aug 14, 2024 · Updated last year
- Optimize GEMM with Tensor Cores step by step ☆37 · Dec 17, 2023 · Updated 2 years ago
- Fast and memory-efficient exact attention ☆22,938 · Mar 23, 2026 · Updated last week
- ☆16 · Mar 13, 2023 · Updated 3 years ago
- Implementation of Proximal Policy Optimization in JAX + Flax ☆21 · May 18, 2023 · Updated 2 years ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Nov 11, 2024 · Updated last year
- Simple PyTorch profiler that combines DeepSpeed Flops Profiler and TorchInfo ☆11 · Feb 12, 2023 · Updated 3 years ago
- ☆24 · Feb 8, 2024 · Updated 2 years ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆1,098 · Dec 30, 2024 · Updated last year
- ☆16 · Apr 7, 2024 · Updated last year
- The official implementation of the DAC 2024 paper GQA-LUT ☆21 · Dec 20, 2024 · Updated last year
- Digital Design Lab Spring 2019 final project ☆13 · Jun 17, 2019 · Updated 6 years ago
- PyTorch quantization framework for OCP MX datatypes ☆16 · May 30, 2025 · Updated 10 months ago
- NES emulator written in pure FreeBASIC with love by Blyss Sarania and Gavin Schulte (Nobbs66) ☆21 · Oct 29, 2025 · Updated 5 months ago
- [NeurIPS 2025] Multipole Attention for Efficient Long Context Reasoning ☆22 · Dec 5, 2025 · Updated 3 months ago
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, roc wmma), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA en… ☆51 · Aug 25, 2024 · Updated last year
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆494 · Jan 20, 2026 · Updated 2 months ago
- Implementation of the premier text-to-video model from OpenAI ☆56 · Nov 11, 2024 · Updated last year
- AFPQ code implementation ☆23 · Nov 6, 2023 · Updated 2 years ago
- My defense presentation ☆10 · Mar 7, 2022 · Updated 4 years ago
- CUDA SGEMM optimization notes ☆15 · Oct 31, 2023 · Updated 2 years ago
- ☆14 · Feb 18, 2024 · Updated 2 years ago
- Implementation of MoE-Mamba from the paper "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Ze… ☆124 · Mar 22, 2026 · Updated last week
- ☆18 · Nov 10, 2024 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference ☆46 · Jun 11, 2025 · Updated 9 months ago
- ☆17 · Dec 9, 2022 · Updated 3 years ago
- The open-source implementation of "Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers" ☆19 · Mar 11, 2024 · Updated 2 years ago
- [CVPR 2025] Efficient Personalization of Quantized Diffusion Model without Backpropagation ☆15 · Mar 31, 2025 · Updated 11 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,692 · Updated this week
- An artificial matrix generator in C ☆12 · Feb 16, 2023 · Updated 3 years ago