sonnyli / flash_attention_from_scratch
Flash Attention from Scratch on CUDA Ampere
☆37 · Updated 2 months ago
Alternatives and similar repositories for flash_attention_from_scratch
Users interested in flash_attention_from_scratch are comparing it to the libraries listed below.
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling ☆49 · Updated this week
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. ☆76 · Updated this week
- Open ABI and FFI for Machine Learning Systems ☆174 · Updated this week
- Codes & examples for "CUDA - From Correctness to Performance" ☆117 · Updated last year
- ☆64 · Updated 5 months ago
- Summary of the Specs of Commonly Used GPUs for Training and Inference of LLMs ☆64 · Updated 3 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆143 · Updated 2 months ago
- Tile-based language built for AI computation across all scales ☆80 · Updated last week
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆82 · Updated this week
- A lightweight design for computation-communication overlap. ☆187 · Updated last month
- PerFlow-AI is a programmable performance analysis, modeling, and prediction tool for AI systems. ☆24 · Updated this week
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆49 · Updated last month
- ☆273 · Updated 3 weeks ago
- A PyTorch-like deep learning framework. Just for fun. ☆156 · Updated 2 years ago
- ☆90 · Updated 7 months ago
- Canvas: End-to-End Kernel Architecture Search in Neural Networks ☆26 · Updated last year
- ☆79 · Updated 3 years ago
- Low-overhead tracing library and trace visualizer for pipelined CUDA kernels ☆108 · Updated last week
- High-performance Transformer implementation in C++. ☆142 · Updated 10 months ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆186 · Updated 9 months ago
- DeeperGEMM: crazy optimized version ☆73 · Updated 6 months ago
- A simple API to use CUPTI ☆11 · Updated 3 months ago
- DeepSeek-V3/R1 inference performance simulator ☆168 · Updated 7 months ago
- Summary of some awesome work for optimizing LLM inference ☆138 · Updated 2 weeks ago
- Implement Flash Attention using Cute. ☆96 · Updated 11 months ago
- ☆33 · Updated last month
- ☆14 · Updated 3 months ago
- ☆32 · Updated last year
- [HPCA 2025] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆62 · Updated last week
- ☆110 · Updated 6 months ago