feifeibear / LLMRoofline
Compare different hardware platforms via the Roofline Model for LLM inference tasks.
☆110 · Updated last year
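The roofline model caps attainable throughput at the lesser of peak compute and memory bandwidth times arithmetic intensity. A minimal sketch of that bound, assuming illustrative A100-class hardware numbers (not values taken from this repository):

```python
# Minimal roofline sketch: attainable FLOP/s is capped by either peak
# compute or memory bandwidth times arithmetic intensity (FLOPs/byte).
# The hardware numbers below are illustrative A100-80GB specs, not
# values taken from the LLMRoofline repository.

PEAK_FLOPS = 312e12  # FP16 Tensor Core peak, FLOP/s
PEAK_BW = 2.0e12     # HBM bandwidth, bytes/s

def attainable_flops(arithmetic_intensity: float) -> float:
    """Roofline: min(compute roof, memory roof) at a given intensity."""
    return min(PEAK_FLOPS, PEAK_BW * arithmetic_intensity)

# Example: batch-1 LLM decode is memory-bound. A GEMV over a [d, d]
# FP16 weight does ~2*d*d FLOPs while reading ~2*d*d bytes, so its
# arithmetic intensity is ~1 FLOP/byte.
for ai in (1, 10, 100, 1000):
    print(f"AI={ai:>5} FLOP/B -> {attainable_flops(ai) / 1e12:.1f} TFLOP/s")
```

At 1 FLOP/byte the bound is only 2 TFLOP/s (memory roof), which is why batch-1 decoding utilizes a tiny fraction of peak compute; comparing hardware platforms then reduces to comparing their bandwidth and compute roofs.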
Alternatives and similar repositories for LLMRoofline
Users who are interested in LLMRoofline are comparing it to the libraries listed below.
- ☆92 · Updated 4 months ago
- ☆145 · Updated 5 months ago
- ☆96 · Updated 10 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆103 · Updated 2 months ago
- A lightweight design for computation-communication overlap. ☆154 · Updated last month
- Fast and memory-efficient exact attention. ☆82 · Updated last week
- ☆128 · Updated 7 months ago
- High performance Transformer implementation in C++. ☆128 · Updated 6 months ago
- ☆60 · Updated 3 months ago
- ☆78 · Updated 3 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆61 · Updated last year
- ☆209 · Updated this week
- ☆149 · Updated 6 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer. ☆94 · Updated 3 weeks ago
- Sequence-level 1F1B schedule for LLMs. ☆29 · Updated last month
- A simple calculation for LLM MFU (see the MFU sketch after this list). ☆42 · Updated 5 months ago
- ☆42 · Updated 11 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆93 · Updated 2 months ago
- ☆89 · Updated 2 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆137 · Updated 3 months ago
- DeeperGEMM: crazy optimized version. ☆71 · Updated 2 months ago
- Summary of the specs of commonly used GPUs for training and inference of LLMs. ☆55 · Updated this week
- Stateful LLM Serving. ☆77 · Updated 4 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference. ☆40 · Updated last month
- ☆109 · Updated 8 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity. ☆216 · Updated last year
- ☆139 · Updated last year
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend. ☆61 · Updated last week
- Automated Parallelization System and Infrastructure for Multiple Ecosystems. ☆79 · Updated 8 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆39 · Updated 5 months ago
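MFU (Model FLOPs Utilization), referenced in the calculator item above, is the achieved model FLOP/s divided by the aggregate hardware peak. A minimal sketch using the common 6N-FLOPs-per-token approximation for dense transformer training; all numbers are illustrative assumptions, not taken from the linked repository:

```python
# Minimal MFU (Model FLOPs Utilization) sketch, using the common
# 6 * N FLOPs-per-token approximation for dense transformer training
# (2N forward + 4N backward). Numbers below are illustrative, not
# values taken from the linked MFU calculator repository.

def mfu(params: float, tokens_per_sec: float,
        num_gpus: int, peak_flops_per_gpu: float) -> float:
    """Achieved model FLOP/s divided by aggregate hardware peak FLOP/s."""
    achieved = 6.0 * params * tokens_per_sec
    return achieved / (num_gpus * peak_flops_per_gpu)

# Example: a 7B-parameter model training at 30,000 tokens/s total on
# 8 GPUs with 312 TFLOP/s FP16 peak each (A100-class).
print(f"MFU = {mfu(7e9, 3.0e4, 8, 312e12):.1%}")  # -> MFU = 50.5%
```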