CalvinXKY / mfu_calculation
A simple calculator for LLM MFU (Model FLOPs Utilization).
☆29 · Updated 3 weeks ago
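For context, MFU is the ratio of the FLOP/s a run actually sustains to the hardware's theoretical peak. A minimal sketch of the standard estimate, using the common ~6 FLOPs-per-parameter-per-token approximation for dense transformer training (function and parameter names here are illustrative, not mfu_calculation's actual API):

```python
def estimate_mfu(n_params: float, tokens_per_sec: float,
                 num_gpus: int, peak_flops_per_gpu: float) -> float:
    """Estimate Model FLOPs Utilization (MFU).

    Uses the common approximation that training a dense transformer
    costs ~6 FLOPs per parameter per token (forward + backward pass).
    """
    achieved_flops = 6.0 * n_params * tokens_per_sec  # FLOP/s actually sustained
    peak_flops = num_gpus * peak_flops_per_gpu        # theoretical hardware peak
    return achieved_flops / peak_flops

# Example: a 7B model training at 24k tokens/s on 8 GPUs,
# assuming a 312 TFLOPS BF16 peak per GPU (A100-class).
print(f"MFU: {estimate_mfu(7e9, 24_000, 8, 312e12):.1%}")  # ~40.4%
```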
Alternatives and similar repositories for mfu_calculation:
Users interested in mfu_calculation are comparing it to the libraries listed below.
- GPTQ inference TVM kernel ☆38 · Updated 11 months ago
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆56 · Updated last year
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆43 · Updated 4 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆68 · Updated 9 months ago
- Estimate MFU for DeepSeekV3 ☆21 · Updated 2 months ago
- ☆65 · Updated 3 months ago
- ☆90 · Updated 6 months ago
- ☆52 · Updated this week
- ☆90 · Updated 4 months ago
- ☆52 · Updated last year
- DeeperGEMM: crazy optimized version ☆63 · Updated 2 weeks ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆62 · Updated 2 weeks ago
- ☆19 · Updated 6 months ago
- Quantized Attention on GPU ☆45 · Updated 4 months ago
- Summary of the Specs of Commonly Used GPUs for Training and Inference of LLM ☆32 · Updated 2 weeks ago
- ☆81 · Updated 2 years ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆51 · Updated 8 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆111 · Updated 2 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆103 · Updated last month
- PyTorch bindings for CUTLASS grouped GEMM. ☆77 · Updated 5 months ago
- High-performance Transformer implementation in C++. ☆111 · Updated 2 months ago
- Implement Flash Attention using Cute. ☆74 · Updated 3 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the sketch after this list) ☆93 · Updated last year
- Summary of some awesome work for optimizing LLM inference ☆66 · Updated this week
- ☆75 · Updated this week
- ☆72 · Updated 3 years ago
- ☆32 · Updated 7 months ago
- ☆125 · Updated 3 weeks ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs to achieve peak performance ⚡️ ☆63 · Updated this week
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆153 · Updated 8 months ago
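On the roofline comparison noted above: the roofline model caps a kernel's attainable throughput at min(peak compute, memory bandwidth × arithmetic intensity). A tiny sketch under assumed A100-like numbers (all values illustrative, not taken from that repository):

```python
def roofline_attainable_flops(peak_flops: float, mem_bw: float,
                              arithmetic_intensity: float) -> float:
    """Attainable FLOP/s under the roofline model.

    arithmetic_intensity is FLOPs per byte moved; below the ridge
    point (peak_flops / mem_bw) the kernel is memory-bound.
    """
    return min(peak_flops, mem_bw * arithmetic_intensity)

# Assumed numbers: 312 TFLOPS BF16 peak, 2.0 TB/s HBM bandwidth.
peak, bw = 312e12, 2.0e12
# Decode-phase GEMV has low intensity (~1 FLOP/byte) -> memory-bound.
print(roofline_attainable_flops(peak, bw, 1.0))    # 2.0e12 FLOP/s
# Large-batch GEMM sits past the ridge point (156 FLOPs/byte here).
print(roofline_attainable_flops(peak, bw, 200.0))  # capped at 3.12e14
```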