☆242 · Updated Jan 2, 2025
Alternatives and similar repositories for triton-flash-attention
Users interested in triton-flash-attention are comparing it to the libraries listed below; a minimal Triton sketch of the flash-attention inner loop these tutorials implement appears after the list.
- ☆15, updated Feb 23, 2025
- Distributed training (multi-node) of a Transformer model (☆94, updated Apr 10, 2024)
- Material for gpu-mode lectures (☆5,897, updated Feb 1, 2026)
- ☆46, updated May 24, 2025
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and caching (☆60, updated Oct 27, 2025)
- 100 days of building GPU kernels! (☆581, updated Apr 27, 2025)
- Coding a Multimodal (Vision) Language Model from scratch in PyTorch with full explanation: https://www.youtube.com/watch?v=vAmKB7iPkWw (☆598, updated Dec 6, 2024)
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS (☆494, updated Jan 20, 2026)
- Puzzles for learning Triton (☆2,348, updated Mar 18, 2026)
- Transformer components, but in Triton (☆34, updated May 9, 2025)
- ML algorithm implementations that are good for learning the underlying principles (☆27, updated Dec 7, 2024)
- A streamlined flash-attention implementation using CUTLASS, intended for teaching (☆59, updated Aug 12, 2024)
- DiTAS: Quantizing Diffusion Transformers via Enhanced Activation Smoothing (WACV 2025) (☆13, updated Feb 7, 2026)
- Making the official Triton tutorials actually comprehensible (☆140, updated Aug 25, 2025)
- Optimize GEMM with tensor cores, step by step (☆37, updated Dec 17, 2023)
- Code repository for the ICLR 2025 paper "LeanQuant: Accurate and Scalable Large Language Model Quantization with Loss-error-aware Grid" (☆26, updated Mar 2, 2025)
- Flash Attention in ~100 lines of CUDA (forward pass only) (☆1,098, updated Dec 30, 2024)
- Stable Diffusion implemented from scratch in PyTorch (☆1,040, updated Oct 22, 2024)
- Notes on quantization in neural networks (☆121, updated Dec 14, 2023)
- LLaMA 2 implemented from scratch in PyTorch (☆367, updated Sep 25, 2023)
- Efficient Triton Kernels for LLM Training (☆6,242, updated this week)
- GPU Kernels (☆223, updated Apr 27, 2025)
- Implement Flash Attention using CuTe (☆103, updated Dec 17, 2024)
- Notes on Direct Preference Optimization (☆24, updated Apr 14, 2024)
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS (☆255, updated May 6, 2025)
- Minimalistic 4D-parallelism distributed training framework for educational purposes (☆2,119, updated Aug 26, 2025)
- Penn CIS 5650 (GPU Programming and Architecture) final project (☆44, updated Dec 11, 2023)
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code (☆467, updated Mar 10, 2025)
- ☆119, updated May 16, 2025
- CuTe layout visualization (☆33, updated Jan 18, 2026)
- ☆16, updated May 14, 2025
- Notes about the LLaMA 2 model (☆73, updated Aug 30, 2023)
- IslamicTranslator is an automated solution designed to translate Hadiths into multiple languages using the power … (☆11, updated Jan 17, 2025)
- 🚀 Efficient implementations of state-of-the-art linear attention models (☆4,692, updated this week)
- Community implementation of the paper "Multi-Head Mixture-of-Experts" in PyTorch (☆29, updated Mar 22, 2026)
- Triton version of GQA flash attention, based on the tutorial (☆12, updated Aug 4, 2024)
- From Minimal GEMM to Everything (☆189, updated Feb 10, 2026)
- ☆18, updated Jul 5, 2024
- This project covers convolution operator optimization on GPU, including GEMM-based (implicit GEMM) convolution (☆43, updated Sep 29, 2025)
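Since several of the entries above are flash-attention tutorials, here is a minimal sketch of the online-softmax forward pass they implement, written in Triton. It handles a single head with no causal mask, in float32; all names (`flash_attn`, `_flash_attn_fwd`, the block sizes) are illustrative and not taken from any listed repository.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def _flash_attn_fwd(Q, K, V, O,
                    stride_qm, stride_qd, stride_kn, stride_kd,
                    stride_vn, stride_vd, stride_om, stride_od,
                    seq_len, scale,
                    BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr, BLOCK_D: tl.constexpr):
    # Each program instance owns one block of query rows.
    offs_m = tl.program_id(0) * BLOCK_M + tl.arange(0, BLOCK_M)
    offs_d = tl.arange(0, BLOCK_D)
    q = tl.load(Q + offs_m[:, None] * stride_qm + offs_d[None, :] * stride_qd,
                mask=offs_m[:, None] < seq_len, other=0.0)
    # Running statistics for the online softmax.
    m_i = tl.full((BLOCK_M,), float("-inf"), dtype=tl.float32)  # row maxima so far
    l_i = tl.zeros((BLOCK_M,), dtype=tl.float32)                # row exp-sums so far
    acc = tl.zeros((BLOCK_M, BLOCK_D), dtype=tl.float32)        # unnormalized output
    for start_n in range(0, seq_len, BLOCK_N):
        offs_n = start_n + tl.arange(0, BLOCK_N)
        k = tl.load(K + offs_n[:, None] * stride_kn + offs_d[None, :] * stride_kd,
                    mask=offs_n[:, None] < seq_len, other=0.0)
        v = tl.load(V + offs_n[:, None] * stride_vn + offs_d[None, :] * stride_vd,
                    mask=offs_n[:, None] < seq_len, other=0.0)
        s = tl.dot(q, tl.trans(k)) * scale                      # (BLOCK_M, BLOCK_N) scores
        s = tl.where(offs_n[None, :] < seq_len, s, float("-inf"))
        m_new = tl.maximum(m_i, tl.max(s, axis=1))
        p = tl.exp(s - m_new[:, None])
        alpha = tl.exp(m_i - m_new)                             # rescale old statistics
        l_i = l_i * alpha + tl.sum(p, axis=1)
        acc = acc * alpha[:, None] + tl.dot(p, v)
        m_i = m_new
    acc = acc / l_i[:, None]
    tl.store(O + offs_m[:, None] * stride_om + offs_d[None, :] * stride_od,
             acc, mask=offs_m[:, None] < seq_len)

def flash_attn(q, k, v):
    """q, k, v: (seq_len, head_dim) float32 CUDA tensors; head_dim a power of two >= 16."""
    seq_len, head_dim = q.shape
    o = torch.empty_like(q)
    grid = (triton.cdiv(seq_len, 64),)
    _flash_attn_fwd[grid](q, k, v, o,
                          q.stride(0), q.stride(1), k.stride(0), k.stride(1),
                          v.stride(0), v.stride(1), o.stride(0), o.stride(1),
                          seq_len, head_dim ** -0.5,
                          BLOCK_M=64, BLOCK_N=64, BLOCK_D=head_dim)
    return o
```

The key idea, shared by the CUDA and CUTLASS tutorials above, is that the running maximum `m_i` and exp-sum `l_i` let each K/V block be consumed exactly once without materializing the full attention matrix. The result can be sanity-checked against the reference `torch.softmax(q @ k.T * head_dim ** -0.5, dim=-1) @ v`.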