feifeibear / LLMRoofline
Compare different hardware platforms via the Roofline Model for LLM inference tasks.
☆93 · Updated 11 months ago
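The roofline model behind this comparison is simple: attainable throughput is the minimum of the hardware's peak compute and the product of a kernel's arithmetic intensity and the memory bandwidth. Below is a minimal, illustrative Python sketch of that calculation for an LLM decoding-style GEMV; it is not taken from this repository, and the A100-like peak FLOPS and bandwidth numbers are assumptions for the example.

```python
# Minimal roofline sketch (not LLMRoofline's code): attainable performance is
# min(peak compute, arithmetic intensity * memory bandwidth).

def roofline_tflops(intensity_flop_per_byte: float,
                    peak_tflops: float,
                    bandwidth_tb_per_s: float) -> float:
    """Attainable TFLOP/s for a kernel with the given arithmetic intensity."""
    return min(peak_tflops, intensity_flop_per_byte * bandwidth_tb_per_s)

# Example: a single-batch fp16 GEMV from LLM decoding, weight matrix (n x k).
# FLOPs ~ 2*n*k, bytes moved ~ 2*n*k (fp16 weights dominate), so intensity ~ 1.
n, k = 4096, 4096
flops = 2 * n * k
bytes_moved = 2 * n * k            # fp16 weights; activations are negligible
intensity = flops / bytes_moved    # ~1 FLOP/byte -> firmly memory bound

# Assumed A100-like hardware numbers, for illustration only.
peak_tflops = 312.0                # fp16 Tensor Core peak, TFLOP/s
bandwidth_tb_s = 2.0               # HBM bandwidth, TB/s

print(f"intensity = {intensity:.1f} FLOP/byte, "
      f"attainable = {roofline_tflops(intensity, peak_tflops, bandwidth_tb_s):.1f} TFLOP/s")
```

With these assumed numbers the decoding GEMV lands at about 2 TFLOP/s, far below the compute peak, which is why memory bandwidth dominates hardware comparisons for LLM decoding.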
Alternatives and similar repositories for LLMRoofline:
Users interested in LLMRoofline are comparing it to the libraries listed below.
- ☆101 · Updated 6 months ago
- High-performance Transformer implementation in C++. ☆102 · Updated last month
- ☆127 · Updated last month
- ☆140 · Updated 9 months ago
- ☆81 · Updated 5 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆51 · Updated 6 months ago
- ☆67 · Updated 2 months ago
- ☆83 · Updated 3 months ago
- Fast and memory-efficient exact attention ☆44 · Updated this week
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ⚡️ ☆52 · Updated 2 weeks ago
- ☆62 · Updated 2 months ago
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆104 · Updated 5 months ago
- Decoding Attention is specially optimized for multi-head attention (MHA) using CUDA cores for the decoding stage of LLM inference. ☆29 · Updated 3 months ago
- PyTorch distributed training acceleration framework ☆39 · Updated last week
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆101 · Updated this week
- Performance of the C++ interface of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios. ☆35 · Updated 5 months ago
- A low-latency & high-throughput serving engine for LLMs ☆312 · Updated 3 weeks ago
- ☆76 · Updated last year
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆78 · Updated 3 months ago
- ☆142 · Updated last month
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆56 · Updated last year
- ATC '23 AE ☆45 · Updated last year
- Curated collection of papers on MoE model inference ☆64 · Updated this week
- Quantized Attention on GPU ☆34 · Updated 2 months ago
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer ☆88 · Updated 11 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆290 · Updated this week
- ☆43 · Updated this week
- Artifact of the OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆60 · Updated 8 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆64 · Updated 8 months ago
- Implement Flash Attention using CuTe. ☆69 · Updated 2 months ago