pprp / ultrascale-playbook-zh
Chinese translation of the UltraScale Playbook / UltraScale Playbook 中文版
☆35 · Updated last month
Alternatives and similar repositories for ultrascale-playbook-zh:
Users interested in ultrascale-playbook-zh are comparing it to the libraries listed below.
- Code release for the book "Efficient Training in PyTorch" ☆60 · Updated 2 weeks ago
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆130 · Updated last year
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆97 · Updated last year
- LLM theoretical performance analysis tool supporting params, FLOPs, memory, and latency analysis. ☆85 · Updated 3 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆262 · Updated 10 months ago
- ☆82 · Updated last month
- ☆92 · Updated 7 months ago
- ☆131 · Updated last month
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆243 · Updated last week
- Learning how CUDA works ☆238 · Updated last month
- PyTorch bindings for CUTLASS grouped GEMM. ☆120 · Updated 3 months ago
- Transformer-related optimization, including BERT and GPT ☆59 · Updated last year
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆66 · Updated last week
- ☆139 · Updated last year
- A llama model inference framework implemented in CUDA C++ ☆50 · Updated 5 months ago
- ☆121 · Updated this week
- A simplified flash-attention implementation using CUTLASS, intended as a teaching resource ☆39 · Updated 8 months ago
- ☆48 · Updated this week
- QQQ is an innovative, hardware-optimized W4A8 quantization solution for LLMs. ☆116 · Updated 2 weeks ago
- Examples of CUDA implementations using CUTLASS CuTe ☆159 · Updated 2 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆52 · Updated 8 months ago
- ☆127 · Updated 4 months ago
- A lightweight llama-like LLM inference framework based on Triton kernels. ☆108 · Updated last week
- ☆78 · Updated last year
- ☆148 · Updated 3 months ago
- ☆88 · Updated 3 weeks ago
- Transformer-related optimization, including BERT and GPT ☆17 · Updated last year
- FP8 flash attention for the Ada architecture, implemented with the CUTLASS library ☆63 · Updated 8 months ago
- An industrial extension library for PyTorch to accelerate large-scale model training ☆32 · Updated 2 months ago
- 📚FFPA (Split-D): Yet another Faster Flash Attention with O(1) GPU SRAM complexity for large headdim, 1.8x~3x↑🎉 faster than SDPA EA. ☆169 · Updated 2 weeks ago