ZRayZzz / flash-attention-v100
☆50Updated last year
Alternatives and similar repositories for flash-attention-v100
Users who are interested in flash-attention-v100 are comparing it to the libraries listed below.
- Triton documentation in Simplified Chinese / Triton 中文文档☆80Updated 4 months ago
- ☆79Updated last year
- A llama model inference framework implemented in CUDA C++☆60Updated 9 months ago
- 🤖FFPA: Extends FlashAttention-2 with Split-D, achieving ~O(1) SRAM complexity for large headdim; 1.8x~3x↑🎉 vs SDPA EA.☆211Updated 3 weeks ago
- ☆54Updated this week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including …☆263Updated 3 weeks ago
- LLM theoretical performance analysis tool supporting params, FLOPs, memory, and latency analysis.☆104Updated last month
- Compare different hardware platforms via the Roofline Model for LLM inference tasks.☆112Updated last year
- An easy-to-use package for implementing SmoothQuant for LLMs☆105Updated 4 months ago
- ☆143Updated last month
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models☆326Updated 6 months ago
- ☆146Updated 5 months ago
- Tutorials for writing high-performance GPU operators in AI frameworks.☆130Updated 2 years ago
- ☆141Updated last year
- Model compression toolkit engineered for enhanced usability, comprehensiveness, and efficiency.☆100Updated this week
- FlagScale is a large model toolkit based on open-sourced projects.☆347Updated last week
- ☆93Updated 5 months ago
- ☆128Updated 8 months ago
- A collection of memory efficient attention operators implemented in the Triton language.☆278Updated last year
- UltraScale Playbook (Chinese edition)☆71Updated 5 months ago
- ☆78Updated 9 months ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec…☆175Updated this week
- ☆49Updated 10 months ago
- Implementation of FlashAttention in PyTorch☆164Updated 7 months ago
- PaddlePaddle (飞桨) Escort Program training camp☆21Updated 3 weeks ago
- ☆138Updated last year
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs.☆138Updated last week
- ☆98Updated 11 months ago
- An annotated nano_vllm repository, with MiniCPM4 adaptation and support for registering new models☆39Updated 3 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM.☆138Updated this week