hyperai / triton-cn
Triton Documentation in Simplified Chinese / Triton 中文文档
☆103 · Dec 17, 2025 · Updated last month
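As a quick illustration of the kind of code this documentation covers, below is a minimal Triton vector-add kernel, the classic first example from the official Triton tutorials. It uses only standard Triton API (`triton.jit`, `tl.program_id`, `tl.arange`, `tl.load`/`tl.store`, `triton.cdiv`); the names `add_kernel` and `add` are illustrative, not taken from triton-cn itself.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance processes one BLOCK_SIZE-wide slice of the input.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against the out-of-bounds tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    # Launch one program per BLOCK_SIZE elements, rounded up.
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```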
Alternatives and similar repositories for triton-cn
Users interested in triton-cn are comparing it to the libraries listed below.
- ☆13 · Jan 7, 2025 · Updated last year
- ☆20 · Dec 24, 2024 · Updated last year
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS · ☆486 · Jan 20, 2026 · Updated 3 weeks ago
- ☆10 · Jul 18, 2024 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency · ☆114 · Sep 10, 2024 · Updated last year
- Kernel Library Wheel for SGLang · ☆17 · Updated this week
- Fast and memory-efficient exact attention · ☆18 · Jan 23, 2026 · Updated 3 weeks ago
- Low overhead tracing library and trace visualizer for pipelined CUDA kernels · ☆130 · Nov 26, 2025 · Updated 2 months ago
- ☆85 · Apr 18, 2025 · Updated 9 months ago
- JAX bindings for the flash-attention3 kernels · ☆20 · Jan 2, 2026 · Updated last month
- A reading list of popular MLSys topics · ☆21 · Mar 20, 2025 · Updated 10 months ago
- DeepStream + CUDA: yolo26, yolo-master, yolo11, yolov8, SAM, transformer, etc. · ☆35 · Updated this week
- Graph model execution API for Candle · ☆17 · Jul 27, 2025 · Updated 6 months ago
- Code for paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference · ☆160 · Oct 13, 2025 · Updated 4 months ago
- Tile-based language built for AI computation across all scales · ☆120 · Updated this week
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer · ☆96 · Sep 13, 2025 · Updated 5 months ago
- ☆20 · Jun 9, 2025 · Updated 8 months ago
- Less Is More: Training-Free Sparse Attention with Global Locality for Efficient Reasoning · ☆29 · Sep 12, 2025 · Updated 5 months ago
- PyTorch implementation of the Flash Spectral Transform Unit · ☆21 · Sep 19, 2024 · Updated last year
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … · ☆314 · Jun 10, 2025 · Updated 8 months ago
- FSANet: 1 Mb!! Head Pose Estimation with MNN, TNN, and ONNXRuntime C++ · ☆17 · Feb 4, 2022 · Updated 4 years ago
- Handwritten GEMM using Intel AMX (Advanced Matrix Extensions) · ☆17 · Jan 11, 2025 · Updated last year
- ☆22 · May 5, 2025 · Updated 9 months ago
- ☆27 · Jan 7, 2025 · Updated last year
- ☆79 · Nov 26, 2024 · Updated last year
- A domain-specific language (DSL) based on Triton but providing higher-level abstractions · ☆41 · Feb 4, 2026 · Updated last week
- ☆158 · Dec 26, 2024 · Updated last year
- A lightweight LLM inference framework · ☆21 · May 26, 2025 · Updated 8 months ago
- Whisper in TensorRT-LLM · ☆17 · Sep 21, 2023 · Updated 2 years ago
- A CUDA runtime environment built on the CUDA Driver API · ☆15 · Jul 30, 2025 · Updated 6 months ago
- Multiple GEMM operators built with CUTLASS to support LLM inference · ☆20 · Aug 3, 2025 · Updated 6 months ago
- A repo for running LLMs on ncnn · ☆189 · Feb 2, 2026 · Updated last week
- PyTorch bindings for CUTLASS grouped GEMM · ☆143 · May 29, 2025 · Updated 8 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference · ☆46 · Jun 11, 2025 · Updated 8 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts · ☆40 · Feb 29, 2024 · Updated last year
- Analyze the inference of large language models (LLMs): aspects like computation, storage, transmission, and hardware roofline mod… · ☆617 · Sep 11, 2024 · Updated last year
- Codes & examples for "CUDA - From Correctness to Performance" · ☆123 · Oct 24, 2024 · Updated last year
- FP8 flash attention implemented on the Ada architecture using the cutlass repository · ☆78 · Aug 12, 2024 · Updated last year
- Deferred Continuous Batching in Resource-Efficient Large Language Model Serving (EuroMLSys 2024) · ☆19 · May 28, 2024 · Updated last year