Starmys / TritonStudyGroupLinks
☆114 · Updated 3 months ago
Alternatives and similar repositories for TritonStudyGroupLinks
Users interested in TritonStudyGroupLinks are comparing it to the libraries listed below.
- Puzzles for learning Triton; play them with minimal environment configuration! (See the minimal Triton kernel sketch after this list.) ☆590 · Updated 2 weeks ago
- flash attention tutorial written in python, triton, cuda, cutlass ☆473 · Updated 8 months ago
- ☆150 · Updated 6 months ago
- Examples of CUDA implementations using CUTLASS CuTe ☆266 · Updated 6 months ago
- A collection of noteworthy MLSys bloggers (Algorithms/Systems) ☆315 · Updated last year
- ☆126 · Updated 4 months ago
- LLM theoretical performance analysis tool supporting parameter, FLOPs, memory, and latency analysis. ☆114 · Updated 6 months ago
- Implement Flash Attention using Cute. ☆100 · Updated last year
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆626 · Updated this week
- A summary of awesome work on optimizing LLM inference ☆162 · Updated last month
- An annotated nano_vllm repository, with MiniCPM4 adaptation and support for registering new models ☆139 · Updated 5 months ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of …) ☆304 · Updated 7 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆287 · Updated last year
- 🤖FFPA: Extends FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim; 1.8x~3x↑🎉 vs SDPA EA. ☆242 · Updated last month
- A minimalist and extensible PyTorch extension for implementing custom backend operators in PyTorch. ☆38 · Updated last year
- ☆176 · Updated 2 years ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆145 · Updated 3 weeks ago
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ☆205 · Updated last month
- ☆153 · Updated 10 months ago
- learning how CUDA works ☆366 · Updated 10 months ago
- Code release for the book "Efficient Training in PyTorch" ☆119 · Updated 9 months ago
- Triton Documentation in Simplified Chinese / Triton 中文文档 ☆97 · Updated 3 weeks ago
- From Minimal GEMM to Everything ☆95 · Updated 2 weeks ago
- ☆112 · Updated 7 months ago
- Flash Attention from Scratch on CUDA Ampere ☆115 · Updated 4 months ago
- ☆45 · Updated last year
- FlagGems is an operator library for large language models implemented in the Triton language. ☆824 · Updated last week
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆282 · Updated 10 months ago
- A PyTorch-like deep learning framework. Just for fun. ☆157 · Updated 2 years ago
- ☆104 · Updated last year
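
Several entries above center on Triton kernel programming (the Triton puzzles, the Triton 中文文档, FlagGems, and the memory-efficient attention operators). For orientation, here is a minimal sketch of what a Triton kernel looks like: the standard vector-add example in the style of the official Triton tutorials, not code taken from any repository listed above. The `BLOCK_SIZE` of 1024 is an arbitrary illustrative choice.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous BLOCK_SIZE chunk.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    # The mask guards the ragged tail when n_elements % BLOCK_SIZE != 0.
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Assumes x and y are CUDA tensors of the same shape.
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)  # one program per 1024-element block
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

Calling `add(x, y)` on two equal-shaped CUDA tensors returns their elementwise sum; the puzzles and operator libraries listed above build on this same load/compute/store pattern at larger scale.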