pprp / ultrascale-playbook-zh
UltraScale Playbook (Chinese edition)
☆109 · Updated 9 months ago
Alternatives and similar repositories for ultrascale-playbook-zh
Users interested in ultrascale-playbook-zh are comparing it to the libraries listed below.
- An annotated nano_vllm repository, with MiniCPM4 adaptation and support for registering new models ☆134 · Updated 4 months ago
- LLM theoretical performance analysis tools supporting params, FLOPs, memory, and latency analysis. ☆113 · Updated 5 months ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆615 · Updated this week
- FlagScale is a large-model toolkit built on open-source projects. ☆431 · Updated this week
- ☆153 · Updated 10 months ago
- Learning how CUDA works ☆362 · Updated 10 months ago
- Code release for the book "Efficient Training in PyTorch" ☆118 · Updated 8 months ago
- Materials for learning SGLang ☆709 · Updated 3 weeks ago
- Puzzles for learning Triton; play with minimal environment configuration! ☆584 · Updated last week
- ☆518 · Updated last month
- ☆114 · Updated 3 months ago
- ☆150 · Updated 6 months ago
- A collection of noteworthy MLSys bloggers (algorithms/systems) ☆311 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆272 · Updated 5 months ago
- LLM training technologies developed by kwai ☆68 · Updated last month
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod… ☆602 · Updated last year
- FlagGems is an operator library for large language models implemented in the Triton language. ☆824 · Updated this week
- How to learn PyTorch and OneFlow ☆468 · Updated last year
- A lightweight Llama-like LLM inference framework built on Triton kernels. ☆167 · Updated this week
- 青稞Talk ☆181 · Updated this week
- A collection of memory-efficient attention operators implemented in the Triton language. ☆287 · Updated last year
- ☆141 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs). ☆758 · Updated 9 months ago
- This repository organizes materials, recordings, and schedules related to AI-infra learning meetings. ☆288 · Updated this week
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆471 · Updated 7 months ago
- Examples of CUDA implementations with CUTLASS CuTe ☆264 · Updated 6 months ago
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆96 · Updated 3 weeks ago
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆242 · Updated last month
- Paper list on efficient Mixture-of-Experts for LLMs ☆154 · Updated 3 months ago
- A minimalist and extensible PyTorch extension for implementing custom backend operators in PyTorch. ☆38 · Updated last year