LearningInfiniTensor / handout
Training camp lecture handouts (训练营讲义)
☆21 · Updated 10 months ago
Alternatives and similar repositories for handout
Users who are interested in handout are comparing it to the repositories listed below.
- ☆125 · Updated last month
- Easy CUDA code ☆90 · Updated 11 months ago
- ☆29 · Updated last month
- Notes (笔记) ☆47 · Updated 3 months ago
- ☆63 · Updated 10 months ago
- ☆274 · Updated last month
- A layered, decoupled deep learning inference engine ☆76 · Updated 9 months ago
- FlagTree is a unified compiler for multiple AI chips, forked from triton-lang/triton. ☆137 · Updated last week
- Operator library (算子库) ☆17 · Updated 4 months ago
- Tiny C++ LLM inference implementation from scratch ☆95 · Updated last week
- Free resource for the book AI Compiler Development Guide ☆47 · Updated 2 years ago
- Personal homepage of the Advanced Compiler Lab (先进编译实验室) ☆174 · Updated last month
- Large-scale Auto-Distributed Training/Inference Unified Framework | Memory-Compute-Control Decoupled Architecture | Multi-language SDK & … ☆55 · Updated 4 months ago
- Triton Documentation in Simplified Chinese / Triton 中文文档 ☆94 · Updated 2 weeks ago
- A llama model inference framework implemented in CUDA C++ ☆62 · Updated last year
- ☆70 · Updated 2 years ago
- Some HPC projects for learning ☆25 · Updated last year
- Write Your Own AI Compiler (《自己动手写AI编译器》) ☆31 · Updated last year
- My study notes for MLSys ☆16 · Updated last year
- Gensis is a lightweight deep learning framework written from scratch in Python, with Triton as its backend for high-performance computing… ☆38 · Updated 2 weeks ago
- LeetGPU Solutions ☆86 · Updated last month
- ☆65 · Updated last year
- ☆64 · Updated last week
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆233 · Updated 2 weeks ago
- A domain-specific language (DSL) based on Triton but providing higher-level abstractions. ☆36 · Updated last week
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆206 · Updated last month
- Machine Learning Compiler Road Map ☆45 · Updated 2 years ago
- Codes & examples for "CUDA - From Correctness to Performance" ☆117 · Updated last year
- LLM Inference via Triton (Flexible & Modular): Focused on Kernel Optimization using CUBIN binaries, Starting from gpt-oss Model ☆56 · Updated last month
- An annotated nano_vllm repository, with MiniCPM4 adaptation and support for registering new models ☆108 · Updated 3 months ago