Triton Documentation in Simplified Chinese / Triton 中文文档
☆107 · Mar 5, 2026 · Updated 3 weeks ago
Alternatives and similar repositories for triton-cn
Users interested in triton-cn are comparing it to the libraries listed below.
- ☆19 · Dec 24, 2024 · Updated last year
- NVIDIA TensorRT Hackathon 2023 final-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆43 · Oct 20, 2023 · Updated 2 years ago
- ☆13 · Jan 7, 2025 · Updated last year
- Fast and memory-efficient exact attention ☆21 · Mar 13, 2026 · Updated 2 weeks ago
- A flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆494 · Jan 20, 2026 · Updated 2 months ago
- ☆19 · May 30, 2024 · Updated last year
- Triton adapter for Ascend. Mirror of https://gitcode.com/ascend/triton-ascend ☆115 · Updated this week
- Graph model execution API for Candle ☆17 · Jul 27, 2025 · Updated 8 months ago
- Code for the paper [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆164 · Oct 13, 2025 · Updated 5 months ago
- SGLang Kernel Wheel Index ☆17 · Updated this week
- [ICML 2024] Official repository for the paper "Transformers Get Stable: An End-to-End Signal Propagation Theory for Language Models" ☆10 · Jul 19, 2024 · Updated last year
- ☆10 · Jul 18, 2024 · Updated last year
- PTX ISA 9.1 documentation converted to searchable markdown. Includes a Claude Code skill for CUDA development. ☆45 · Dec 24, 2025 · Updated 3 months ago
- ☆85 · Apr 18, 2025 · Updated 11 months ago
- Low-overhead tracing library and trace visualizer for pipelined CUDA kernels ☆134 · Nov 26, 2025 · Updated 4 months ago
- Benchmark tests supporting the TiledCUDA library ☆18 · Nov 19, 2024 · Updated last year
- ☆54 · Mar 15, 2025 · Updated last year
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆112 · Sep 10, 2024 · Updated last year
- Code and examples for "CUDA - From Correctness to Performance" ☆124 · Oct 24, 2024 · Updated last year
- ☆78 · Nov 26, 2024 · Updated last year
- Herald: Accelerating Neural Recommendation Training with Embedding Scheduling (NSDI 2024) ☆23 · May 9, 2024 · Updated last year
- Less Is More: Training-Free Sparse Attention with Global Locality for Efficient Reasoning ☆29 · Sep 12, 2025 · Updated 6 months ago
- PyTorch distributed training acceleration framework ☆54 · Aug 13, 2025 · Updated 7 months ago
- ☆159 · Dec 26, 2024 · Updated last year
- A practical way of learning swizzle ☆37 · Feb 3, 2025 · Updated last year
- Benchmark framework for Buddy projects ☆55 · Oct 31, 2025 · Updated 4 months ago
- Handwritten GEMM using Intel AMX (Advanced Matrix Extensions) ☆17 · Jan 11, 2025 · Updated last year
- High-performance RMSNorm implementation using SM core storage (registers and shared memory) ☆30 · Jan 22, 2026 · Updated 2 months ago
- A reading list on popular MLSys topics ☆23 · Mar 20, 2025 · Updated last year
- ☆20 · Jun 9, 2025 · Updated 9 months ago
- Download Hugging Face repositories without needing to install dependencies ☆22 · Jul 30, 2025 · Updated 7 months ago
- A PyTorch-native platform for training generative AI models ☆16 · Nov 18, 2025 · Updated 4 months ago
- HunyuanDiT with TensorRT and libtorch ☆18 · May 22, 2024 · Updated last year
- FSANet: 1 MB!! Head pose estimation with MNN, TNN, and ONNXRuntime C++ ☆17 · Feb 4, 2022 · Updated 4 years ago
- JAX bindings for the flash-attention3 kernels ☆21 · Jan 2, 2026 · Updated 2 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆132 · Jun 24, 2025 · Updated 9 months ago
- Official implementation for [ICLR26] DefensiveKV: Taming the Fragility of KV Cache Eviction in LLM Inference ☆31 · Mar 19, 2026 · Updated last week
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆321 · Jun 10, 2025 · Updated 9 months ago
- ☆22 · May 5, 2025 · Updated 10 months ago