hyperai / triton-cn
Triton Documentation in Simplified Chinese / Triton 中文文档
☆71 · Updated 2 months ago
Alternatives and similar repositories for triton-cn
Users interested in triton-cn are comparing it to the libraries listed below.
- A llama model inference framework implemented in CUDA C++ ☆57 · Updated 7 months ago
- Implement Flash Attention using CuTe. ☆87 · Updated 6 months ago
- ☆141 · Updated 3 months ago
- Summary of the Specs of Commonly Used GPUs for Training and Inference of LLMs ☆46 · Updated 3 months ago
- LLM theoretical performance analysis tools supporting parameter, FLOPs, memory, and latency analysis. ☆96 · Updated last week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆100 · Updated last year
- ☆137 · Updated last month
- ⚡️FFPA: Extend FlashAttention-2 with Split-D, achieve ~O(1) SRAM complexity for large headdim, 1.8x~3x↑ vs SDPA. ☆186 · Updated last month
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆80 · Updated last month
- A lightweight llama-like LLM inference framework based on Triton kernels (see the minimal Triton kernel sketch after this list). ☆128 · Updated last week
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆87 · Updated last month
- A practical way of learning Swizzle ☆20 · Updated 4 months ago
- ☆96 · Updated 9 months ago
- ☆135 · Updated last year
- Examples of CUDA implementations using CUTLASS CuTe ☆197 · Updated 4 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA using CUDA cores for the decoding stage of LLM inference. ☆38 · Updated 2 weeks ago
- ☆139 · Updated last year
- ☆45 · Updated last year
- ☆148 · Updated 5 months ago
- ☆86 · Updated 2 months ago
- Chinese edition of the UltraScale Playbook ☆43 · Updated 3 months ago
- Efficient, flexible, and highly fault-tolerant model service management based on SGLang ☆53 · Updated 7 months ago
- Summary of some awesome work for optimizing LLM inference ☆77 · Updated 3 weeks ago
- Implement custom operators in PyTorch with CUDA/C++ ☆63 · Updated 2 years ago
- FP8 flash attention implemented on the Ada architecture using the CUTLASS repository ☆71 · Updated 10 months ago
- Performance of the C++ interface of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios. ☆38 · Updated 3 months ago
- ☆69 · Updated this week
- ☆39 · Updated last year
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆183 · Updated 4 months ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆129 · Updated last week
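
For readers new to the projects above, the following is a minimal sketch of the Triton programming model that triton-cn documents and that several of the listed inference frameworks build on. It is the standard vector-add example from the official Triton tutorials; the names `add_kernel` and `add` are illustrative and do not come from any repository listed here.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the input.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds accesses
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = out.numel()
    # Launch a 1-D grid with one program per block of elements.
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out
```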