hyperai / triton-cn
Triton Documentation in Chinese Simplified / Triton 中文文档
☆99 · Updated last month
Alternatives and similar repositories for triton-cn
Users interested in triton-cn are comparing it to the libraries listed below.
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆246 · Updated last week
- A llama model inference framework implemented in CUDA C++. ☆64 · Updated last year
- LLM theoretical performance analysis tools supporting parameter, FLOPs, memory, and latency analysis. ☆114 · Updated 6 months ago
- Implement custom operators in PyTorch with CUDA/C++. ☆76 · Updated 3 years ago
- ☆155 · Updated 10 months ago
- ☆113 · Updated 2 weeks ago
- ☆152 · Updated 6 months ago
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆102 · Updated this week
- ⚡️Write HGEMM from scratch using Tensor Cores with WMMA, MMA and CuTe API, achieving peak⚡️ performance. ☆145 · Updated 8 months ago
- Implement Flash Attention using Cute. ☆100 · Updated last year
- An annotated nano_vllm repository, with MiniCPM4 adaptation and support for registering new models. ☆147 · Updated 5 months ago
- ☆96 · Updated 10 months ago
- FP8 flash attention implemented on the Ada architecture using the cutlass library. ☆78 · Updated last year
- A light llama-like LLM inference framework based on the triton kernel. ☆169 · Updated 3 weeks ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list). ☆120 · Updated last year
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆219 · Updated last week
- LLM Inference via Triton (Flexible & Modular): Focused on Kernel Optimization using CUBIN binaries, Starting from gpt-oss Model ☆63 · Updated 3 months ago
- ☆144 · Updated last year
- ☆284 · Updated this week
- ☆141 · Updated last year
- ☆152 · Updated last year
- ☆105 · Updated last year
- ☆61 · Updated 6 months ago
- A layered, decoupled deep learning inference engine. ☆79 · Updated 11 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆44 · Updated 11 months ago
- ☆112 · Updated 8 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Updated 7 months ago
- ☆116 · Updated 4 months ago
- ☆47 · Updated last year
- ☆130 · Updated last year
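
As a rough illustration of the roofline-style comparison mentioned in the list above, the minimal sketch below estimates whether single-batch LLM decoding is memory-bound or compute-bound on a given accelerator. The model size, weight byte width, and hardware peak numbers are hypothetical placeholders, not figures taken from any of the listed projects.

```python
# Minimal roofline sketch for one LLM decode step (batch size 1).
# All hardware and model numbers below are assumptions for illustration only.

def roofline_decode(params_b: float, bytes_per_param: int,
                    peak_tflops: float, peak_bw_gbs: float) -> None:
    """Estimate the roofline bound for generating one token."""
    flops = 2 * params_b * 1e9                        # ~2 FLOPs per weight (multiply-add)
    bytes_moved = params_b * 1e9 * bytes_per_param    # every weight read once per token
    intensity = flops / bytes_moved                   # arithmetic intensity (FLOPs/byte)
    balance = (peak_tflops * 1e12) / (peak_bw_gbs * 1e9)  # machine balance point
    bound = "memory-bound" if intensity < balance else "compute-bound"
    t_mem_ms = bytes_moved / (peak_bw_gbs * 1e9) * 1e3
    print(f"intensity={intensity:.2f} FLOPs/B, balance={balance:.1f} -> {bound}, "
          f"bandwidth-limited latency ≈ {t_mem_ms:.2f} ms/token")

# Example: a 7B model in fp16 on a hypothetical 300 TFLOPS / 1.5 TB/s GPU.
roofline_decode(params_b=7, bytes_per_param=2, peak_tflops=300, peak_bw_gbs=1500)
```

With these assumed numbers the arithmetic intensity (~1 FLOP/byte) falls far below the machine balance (~200 FLOPs/byte), so single-batch decoding is memory-bandwidth-bound, which is the typical conclusion such roofline comparisons are used to make.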