ZRayZzz / flash-attention-v100
☆38 · Updated last year
Alternatives and similar repositories for flash-attention-v100
Users interested in flash-attention-v100 are comparing it to the libraries listed below.
- Triton documentation in Simplified Chinese (Triton 中文文档) ☆71 · Updated last month
- A llama model inference framework implemented in CUDA C++ ☆57 · Updated 6 months ago
- ☆79 · Updated last year
- 📚FFPA(Split-D): Extends FlashAttention with Split-D for large headdim, O(1) GPU SRAM complexity, 1.8x~3x↑🎉 faster than SDPA EA. ☆184 · Updated 3 weeks ago
- Transformer-related optimization, including BERT and GPT ☆17 · Updated last year
- ☆138 · Updated 3 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆254 · Updated last week
- ☆16 · Updated last year
- ☆131 · Updated last month
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆51 · Updated 2 weeks ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (a minimal roofline sketch follows this list) ☆100 · Updated last year
- Summary of system papers/frameworks/code/tools on training or serving large models ☆57 · Updated last year
- Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆133 · Updated 2 weeks ago
- Simplify ONNX models larger than 2 GB ☆57 · Updated 6 months ago
- ☆49 · Updated last week
- ☆139 · Updated last year
- LLM theoretical performance analysis tool supporting parameter, FLOPs, memory, and latency analysis ☆92 · Updated last week
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra low-bit LLMs. ☆114 · Updated last year
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆102 · Updated last week
- ☆134 · Updated last year
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang ☆53 · Updated 6 months ago
- ☆96 · Updated 8 months ago
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆130 · Updated last year
- A collection of memory-efficient attention operators implemented in the Triton language. ☆271 · Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆203 · Updated 2 weeks ago
- A MoE implementation for PyTorch; [ATC'23] SmartMoE ☆63 · Updated last year
- ☆127 · Updated 5 months ago
- ☆76 · Updated last month
- ☆85 · Updated 2 months ago
- A simple calculation for LLM MFU (see the MFU sketch after this list). ☆38 · Updated 3 months ago
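One repo above compares hardware platforms via the roofline model: attainable throughput is the lesser of the device's compute peak and its memory bandwidth times the kernel's arithmetic intensity. Here is a minimal Python sketch of that idea; the function name and the V100 peak numbers are illustrative assumptions, not code or data from that repo.

```python
# A minimal roofline-model sketch (illustrative, not the listed repo's code).

def roofline_attainable_flops(peak_flops: float, peak_bw: float, intensity: float) -> float:
    """Attainable throughput (FLOP/s) for a kernel whose arithmetic
    intensity is `intensity` FLOPs per byte of memory traffic."""
    return min(peak_flops, peak_bw * intensity)

# Rough V100 SXM2 peaks: ~125 TFLOP/s (FP16 Tensor Cores), ~900 GB/s HBM2.
PEAK_FLOPS = 125e12
PEAK_BW = 900e9

# Decode-time GEMV-like work sits near 1 FLOP/byte: memory bound (~9e11 FLOP/s).
print(f"{roofline_attainable_flops(PEAK_FLOPS, PEAK_BW, 1.0):.3g} FLOP/s")
# Large prefill GEMMs can exceed 100 FLOPs/byte: capped at the compute peak.
print(f"{roofline_attainable_flops(PEAK_FLOPS, PEAK_BW, 200.0):.3g} FLOP/s")
```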
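For the MFU calculator at the end of the list, a minimal sketch of how MFU is commonly computed follows, assuming the standard ~6N FLOPs-per-token training estimate (2N forward + 4N backward) for a dense decoder; the function name and example numbers are assumptions, not the repo's actual code.

```python
# A minimal MFU sketch under the ~6N FLOPs-per-token training assumption.

def mfu(n_params: float, tokens_per_sec: float, peak_flops: float) -> float:
    """Model FLOPs Utilization: achieved model FLOP/s divided by hardware peak."""
    achieved = 6.0 * n_params * tokens_per_sec  # training FLOPs per token ≈ 6N
    return achieved / peak_flops

# Example: a 7B model training at 3,000 tokens/s per GPU on an A100
# (~312 TFLOP/s BF16 peak) yields roughly 40% MFU.
print(f"{mfu(7e9, 3_000, 312e12):.1%}")  # ~40.4%
```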