feifeibear / LLMRoofline
Compare different hardware platforms via the Roofline Model for LLM inference tasks.
☆75 · Updated 8 months ago
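For context on what the repo computes: the roofline model bounds attainable throughput by the minimum of peak compute and memory bandwidth times arithmetic intensity. Below is a minimal sketch of that calculation; the hardware numbers are illustrative assumptions, not values taken from this repository.

```python
# Roofline sketch: attainable throughput is capped either by the compute
# peak or by memory bandwidth times arithmetic intensity (FLOPs/byte).
def roofline_tflops(peak_tflops: float, bandwidth_tbs: float, intensity: float) -> float:
    """Attainable TFLOP/s = min(compute peak, bandwidth * arithmetic intensity)."""
    return min(peak_tflops, bandwidth_tbs * intensity)

# Decode-stage GEMV on an fp16 model: ~2 FLOPs per weight, 2 bytes per
# weight -> arithmetic intensity of ~1 FLOP/byte (deeply memory bound).
decode_intensity = 2.0 / 2.0

# Hypothetical accelerator: 300 TFLOP/s fp16 peak, 2 TB/s HBM bandwidth.
print(roofline_tflops(300.0, 2.0, decode_intensity))  # 2.0 TFLOP/s, far below peak
```

The gap between the ~2 TFLOP/s memory-bound ceiling and the 300 TFLOP/s compute peak is exactly what makes batching and weight quantization pay off at decode time.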
Related projects
Alternatives and complementary repositories for LLMRoofline
- Materials for learning SGLang (☆96, updated this week)
- High-performance Transformer implementation in C++ (☆82, updated 2 months ago)
- A low-latency & high-throughput serving engine for LLMs (☆245, updated 2 months ago)
- Standalone Flash Attention v2 kernel without libtorch dependency (☆98, updated 2 months ago)
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer (☆85, updated 8 months ago)
- Dynamic Memory Management for Serving LLMs without PagedAttention (☆238, updated last week)
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Parallelism (☆45, updated 3 months ago)
- Performance of the C++ interface of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios (☆29, updated 2 months ago)
- Efficient and easy multi-instance LLM serving (☆213, updated this week)
- A collection of memory-efficient attention operators implemented in the Triton language (☆219, updated 5 months ago)
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs (☆87, updated last month)
- Summary of systems papers/frameworks/code/tools for training or serving large models (☆56, updated 11 months ago)
- Decoding Attention is specially optimized for multi-head attention (MHA) using CUDA cores for the decoding stage of LLM inference (☆23, updated 2 weeks ago)
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of papers (☆175, updated 2 weeks ago)
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving" (☆57, updated 5 months ago)
- Analyze the inference of Large Language Models (LLMs): computation, storage, transmission, and the hardware roofline model (☆311, updated 2 months ago)
- FP8 flash attention for the Ada architecture, implemented with the CUTLASS library (☆52, updated 3 months ago)
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of vLLM) (☆104, updated 4 months ago)
- Fast and memory-efficient exact attention (☆30, updated 3 weeks ago)
- Summary of some awesome work for optimizing LLM inference (☆37, updated 2 weeks ago)
- PyTorch bindings for CUTLASS grouped GEMM (☆53, updated 3 weeks ago)
- Automated Parallelization System and Infrastructure for Multiple Ecosystems (☆75, updated this week)
- Theoretical LLM performance analysis tools supporting parameter, FLOPs, memory, and latency analysis; a sketch of this kind of estimate follows this list (☆33, updated 2 months ago)
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable (☆114, updated 2 months ago)
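To make the theoretical-analysis entry above concrete, here is a hedged sketch of the kind of parameter and FLOPs estimate such tools produce, using the standard approximations. The model dimensions are illustrative (roughly 7B-scale) and embedding FLOPs and biases are ignored; none of this is taken from any listed repository.

```python
# Rough cost model for a decoder-only transformer, using the common
# approximations: 12*h^2 weights per layer (4h^2 attention + 8h^2 MLP)
# and ~2 FLOPs (multiply + add) per weight per generated token.
def decoder_params(layers: int, hidden: int, vocab: int, ffn_mult: int = 4) -> int:
    per_layer = 4 * hidden**2 + 2 * ffn_mult * hidden**2  # attention + MLP
    return layers * per_layer + vocab * hidden            # plus embeddings

def flops_per_token(params: int) -> int:
    return 2 * params  # one multiply-add per weight per token

p = decoder_params(layers=32, hidden=4096, vocab=32000)  # ~6.6B parameters
print(f"params ≈ {p / 1e9:.1f}B, decode FLOPs/token ≈ {flops_per_token(p) / 1e9:.1f} GFLOPs")
```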