CalvinXKY / mfu_calculation
A simple calculation for LLM MFU.
☆44 · Updated this week
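For context, MFU (Model FLOPs Utilization) is the ratio of the FLOPs/s a training run actually achieves to the hardware's peak FLOPs/s. A minimal sketch of such a calculation, assuming the common 6 × params × tokens approximation for transformer training FLOPs; the function name and all concrete numbers below are illustrative, not taken from this repository:

```python
# Minimal MFU estimate: achieved model FLOPs/s divided by hardware peak FLOPs/s.
# Uses the common ~6 FLOPs per parameter per trained token approximation
# (forward + backward pass); all concrete numbers below are hypothetical.

def estimate_mfu(n_params: float, tokens_per_step: float,
                 step_time_s: float, peak_flops_per_s: float) -> float:
    """Return Model FLOPs Utilization as a fraction in [0, 1]."""
    flops_per_step = 6.0 * n_params * tokens_per_step
    achieved_flops_per_s = flops_per_step / step_time_s
    return achieved_flops_per_s / peak_flops_per_s

# Example: a 7B-parameter model, 4M tokens per step, 30 s per step,
# on 32 GPUs at 312 TFLOPs/s peak (BF16) each.
mfu = estimate_mfu(7e9, 4e6, 30.0, 32 * 312e12)
print(f"MFU ~= {mfu:.1%}")  # ~56%
```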
Alternatives and similar repositories for mfu_calculation
Users who are interested in mfu_calculation are comparing it to the libraries listed below.
- Estimate MFU for DeepSeekV3 · ☆24 · Updated 8 months ago
- ☆78 · Updated 4 months ago
- ☆94 · Updated 5 months ago
- ☆147 · Updated 6 months ago
- Utility scripts for PyTorch (e.g. a memory profiler that understands more low-level allocations such as NCCL) · ☆50 · Updated last month
- ☆86 · Updated 3 years ago
- ☆42 · Updated last year
- Bridge Megatron-Core to Hugging Face / Reinforcement Learning · ☆120 · Updated last week
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer · ☆60 · Updated last week
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank · ☆59 · Updated 10 months ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length · ☆111 · Updated 5 months ago
- ☆63 · Updated 4 months ago
- Toolchain built around Megatron-LM for distributed training · ☆64 · Updated last week
- ☆50 · Updated 3 months ago
- ☆98 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM · ☆145 · Updated 2 weeks ago
- An experimental communicating attention kernel based on DeepEP · ☆34 · Updated last month
- DeeperGEMM: crazy optimized version · ☆70 · Updated 4 months ago
- Allow torch tensor memory to be released and resumed later · ☆126 · Updated 2 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM · ☆116 · Updated 3 months ago
- A lightweight design for computation-communication overlap · ☆165 · Updated this week
- ☆55 · Updated last year
- ☆18 · Updated this week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list) · ☆112 · Updated last year
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters · ☆48 · Updated last year
- GPTQ inference TVM kernel · ☆40 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training · ☆118 · Updated 2 weeks ago
- Odysseus: Playground of LLM Sequence Parallelism · ☆77 · Updated last year
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… · ☆61 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments · ☆81 · Updated last week
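The roofline comparison performed by the entry referenced above reduces to a single bound: a kernel can run no faster than the lesser of peak compute and memory bandwidth times its arithmetic intensity. A minimal sketch of the textbook roofline model, with illustrative hardware numbers that are not taken from that repository:

```python
# Textbook roofline bound: attainable throughput is the lesser of peak compute
# and memory bandwidth multiplied by arithmetic intensity (FLOPs per byte moved).

def roofline_attainable(peak_flops: float, mem_bw: float, intensity: float) -> float:
    """Attainable FLOPs/s for a kernel with the given arithmetic intensity."""
    return min(peak_flops, mem_bw * intensity)

# Illustrative accelerator: 312 TFLOPs/s peak, 2 TB/s memory bandwidth.
# Decode-phase GEMVs sit near ~2 FLOPs/byte and are bandwidth-bound;
# large prefill GEMMs at ~300 FLOPs/byte hit the compute roof instead.
print(roofline_attainable(312e12, 2e12, 2.0))    # 4.0e12 FLOPs/s (memory-bound)
print(roofline_attainable(312e12, 2e12, 300.0))  # 3.12e14 FLOPs/s (compute-bound)
```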