ZRayZzz / flash-attention-v100
☆45 · Updated last year
Alternatives and similar repositories for flash-attention-v100
Users interested in flash-attention-v100 are comparing it to the libraries listed below.
- Triton Documentation in Simplified Chinese / Triton 中文文档 ☆71 · Updated 2 months ago
- A llama model inference framework implemented in CUDA C++ ☆57 · Updated 7 months ago
- ☆79 · Updated last year
- ⚡️FFPA: Extends FlashAttention-2 with Split-D, achieving ~O(1) SRAM complexity for large headdim, 1.8x~3x faster than SDPA. ☆186 · Updated last month
- Summary of the Specs of Commonly Used GPUs for Training and Inference of LLM ☆46 · Updated 3 months ago
- ☆139 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆317 · Updated 4 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆38 · Updated 2 weeks ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆102 · Updated 2 months ago
- ☆51 · Updated 2 weeks ago
- ☆87 · Updated 3 months ago
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆64 · Updated last year
- LLM theoretical performance analysis tools, supporting parameter, FLOPs, memory, and latency analysis. ☆96 · Updated last week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆256 · Updated 3 weeks ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆100 · Updated last year
- Implements Flash Attention using CuTe. ☆87 · Updated 6 months ago
- ☆128 · Updated 6 months ago
- ☆24 · Updated 3 weeks ago
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra-low-bit LLMs. ☆115 · Updated last year
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆130 · Updated last year
- A practical way of learning Swizzle ☆20 · Updated 4 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆38 · Updated 4 months ago
- Summary of system papers/frameworks/code/tools on training or serving large models ☆57 · Updated last year
- ☆28 · Updated last month
- ☆137 · Updated last month
- A simple calculation for LLM MFU (see the sketch after this list). ☆38 · Updated 3 months ago
- ☆97 · Updated 9 months ago
- ☆141 · Updated 3 months ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆48 · Updated 3 months ago
- ☆135 · Updated last year
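
For reference, below is a minimal sketch of the MFU (Model FLOPs Utilization) estimate mentioned in the "simple calculation for LLM MFU" entry above: achieved FLOP/s divided by the cluster's theoretical peak FLOP/s. The 6-FLOPs-per-parameter-per-token rule of thumb and all numbers are illustrative assumptions, not taken from that repository.

```python
# Hedged sketch of an MFU estimate for LLM training throughput.
# Assumption: ~6 FLOPs per parameter per token (forward ~2, backward ~4).

def train_mfu(params: float, tokens_per_step: float, step_time_s: float,
              peak_flops_per_gpu: float, num_gpus: int) -> float:
    """Achieved FLOP/s divided by the cluster's theoretical peak FLOP/s."""
    achieved_flops_per_s = 6.0 * params * tokens_per_step / step_time_s
    peak_flops_per_s = peak_flops_per_gpu * num_gpus
    return achieved_flops_per_s / peak_flops_per_s

if __name__ == "__main__":
    # Hypothetical example: a 7B-parameter model, 4M tokens per step,
    # 20 s per step, on 64 GPUs with 312 TFLOP/s BF16 peak each (A100-like).
    mfu = train_mfu(params=7e9, tokens_per_step=4e6, step_time_s=20.0,
                    peak_flops_per_gpu=312e12, num_gpus=64)
    print(f"MFU ≈ {mfu:.1%}")  # ≈ 42% under these assumed numbers
```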