Tencent / hpc-ops
High Performance LLM Inference Operator Library
⭐695 · Updated this week
Alternatives and similar repositories for hpc-ops
Users interested in hpc-ops are comparing it to the libraries listed below.
- Examples of CUDA implementations with CUTLASS CuTe ⭐270 · Updated 7 months ago
- FFPA: extends FlashAttention-2 with Split-D, achieving ~O(1) SRAM complexity for large headdim and a 1.8x~3x speedup vs. SDPA EA ⭐250 · Updated this week
- FlashAttention tutorials written in Python, Triton, CUDA, and CUTLASS (see the online-softmax sketch below the list) ⭐484 · Updated 2 weeks ago
- ⭐152 · Updated last year
- Dynamic memory management for serving LLMs without PagedAttention (see the CUDA VMM sketch below the list) ⭐457 · Updated 8 months ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving ⭐676 · Updated this week
- ⭐113 · Updated 8 months ago
- An easy-to-understand TensorOp matmul tutorial ⭐404 · Updated this week
- Puzzles for learning Triton; play with minimal environment configuration ⭐613 · Updated last month
- FlagCX is a scalable and adaptive cross-chip communication library ⭐172 · Updated this week
- A prefill- and decode-disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation ⭐123 · Updated last month
- Perplexity GPU Kernels ⭐554 · Updated 3 months ago
- High-performance Transformer implementation in C++ ⭐150 · Updated last year
- ⭐342 · Updated last week
- A lightweight design for computation-communication overlap ⭐219 · Updated 2 weeks ago
- Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance (see the WMMA sketch below the list) ⭐148 · Updated 8 months ago
- FlagGems is an operator library for large language models implemented in the Triton language ⭐893 · Updated this week
- A tutorial for CUDA and PyTorch ⭐227 · Updated 2 weeks ago
- ⭐105 · Updated last year
- ⭐161 · Updated 2 months ago
- LLM theoretical performance analysis tools, supporting parameter-count, FLOPs, memory, and latency analysis (see the latency-model sketch below the list) ⭐115 · Updated 6 months ago
- Tile-Based Runtime for Ultra-Low-Latency LLM Inference ⭐564 · Updated 2 weeks ago
- ⭐155 · Updated 11 months ago
- ⭐61 · Updated 6 months ago
- A tiny yet powerful LLM inference system tailored for research purposes; vLLM-equivalent performance with only 2k lines of code (2% of …) ⭐313 · Updated 7 months ago
- ⭐96 · Updated 10 months ago
- Performance of the C++ interfaces of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios ⭐44 · Updated 11 months ago
- ⭐175 · Updated 9 months ago
- Since the emergence of ChatGPT in 2022, accelerating large language models has become increasingly important. Here is a list of pap… ⭐283 · Updated 11 months ago
- ⭐145 · Updated last year
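
The FlashAttention-related entries above all revolve around the same online-softmax recurrence. A minimal single-threaded sketch of that recurrence (illustrative only, not any listed repo's actual kernel) is:

```cuda
#include <cmath>

// Single-pass "online softmax" over one row of attention scores: the
// running max m and denominator d are rescaled as new elements arrive,
// so the row never has to be materialized twice in on-chip memory.
// FlashAttention-style kernels tile this recurrence across a warp/CTA
// and fuse the weighted accumulation of V into the same loop.
void online_softmax(const float* scores, float* out, int n) {
    float m = -INFINITY;  // running max
    float d = 0.0f;       // running (rescaled) denominator
    for (int i = 0; i < n; ++i) {
        float m_new = fmaxf(m, scores[i]);
        d = d * expf(m - m_new) + expf(scores[i] - m_new);
        m = m_new;
    }
    for (int i = 0; i < n; ++i)
        out[i] = expf(scores[i] - m) / d;  // normalize with final m, d
}
```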
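The "serving without PagedAttention" entry builds on CUDA virtual memory management: reserve one large contiguous virtual range per KV cache up front, then map physical pages only as decoding proceeds, so kernels see a plain contiguous tensor with no page-table indirection. A sketch of that driver-API pattern (my simplification, not the repo's API; error checks omitted) follows:

```cuda
#include <cuda.h>

// Reserve a large virtual range; no physical memory is committed yet.
CUdeviceptr reserve_kv_range(size_t max_bytes) {
    CUdeviceptr base = 0;
    cuMemAddressReserve(&base, max_bytes, /*alignment=*/0, /*addr=*/0, 0);
    return base;
}

// Commit one physical page and map it at base + offset as the sequence
// grows. page_bytes must be a multiple of the granularity reported by
// cuMemGetAllocationGranularity.
void grow_kv_range(CUdeviceptr base, size_t offset, size_t page_bytes, int dev) {
    CUmemAllocationProp prop = {};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = dev;

    CUmemGenericAllocationHandle handle;
    cuMemCreate(&handle, page_bytes, &prop, 0);          // physical page
    cuMemMap(base + offset, page_bytes, 0, handle, 0);   // map into range

    CUmemAccessDesc access = {};
    access.location = prop.location;
    access.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
    cuMemSetAccess(base + offset, page_bytes, &access, 1);
}
```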
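For the "HGEMM from scratch" entry, the entry point is the WMMA API: one warp computing one Tensor Core tile. A minimal sketch, under my own layout assumptions (A is MxK row-major, B is KxN column-major, M/N/K multiples of 16, fp32 accumulation):

```cuda
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp computes one 16x16 tile of C = A * B with Tensor Cores.
// Launch: wmma_hgemm<<<dim3(N / 16, M / 16), 32>>>(A, B, C, M, N, K);
__global__ void wmma_hgemm(const half* A, const half* B, float* C,
                           int M, int N, int K) {
    const int tile_m = blockIdx.y * 16;
    const int tile_n = blockIdx.x * 16;

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;
    wmma::fill_fragment(acc, 0.0f);

    for (int k = 0; k < K; k += 16) {
        wmma::load_matrix_sync(a, A + tile_m * K + k, K);  // 16x16 tile of A
        wmma::load_matrix_sync(b, B + tile_n * K + k, K);  // 16x16 tile of B
        wmma::mma_sync(acc, a, b, acc);                    // acc += a * b
    }
    wmma::store_matrix_sync(C + tile_m * N + tile_n, acc, N,
                            wmma::mem_row_major);
}
```

Repos like the one listed go well beyond this, adding shared-memory staging, async copy pipelines, and swizzled layouts to approach peak throughput; the sketch above is only the starting point they build from.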
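The theoretical-performance-analysis entry rests on a simple observation: decode is usually bandwidth-bound, since every generated token reads all weights plus the KV cache once, so per-token latency is bounded below by bytes moved divided by memory bandwidth. A back-of-envelope sketch of the kind of model such tools implement (all numbers illustrative assumptions, not measurements):

```cuda
// Per-token decode latency lower bound: bytes moved / memory bandwidth.
double decode_ms_per_token(double params_billions, double bytes_per_param,
                           double kv_cache_gb, double hbm_tb_per_s) {
    double gb_moved = params_billions * bytes_per_param + kv_cache_gb;
    return gb_moved / hbm_tb_per_s;  // GB / (TB/s) comes out in milliseconds
}
// Example: a 7B model in FP16 with a 0.5 GB KV cache on 2 TB/s HBM:
// decode_ms_per_token(7, 2, 0.5, 2.0) = 14.5 / 2.0 = 7.25 ms/token,
// i.e. ~138 tokens/s as a single-stream ceiling.
```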