Estimate MFU for DeepSeekV3
☆26 · Updated Jan 5, 2025
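For context on what the repository estimates: MFU (Model FLOPs Utilization) compares achieved training throughput against hardware peak. A minimal sketch of the common estimate, using the 6·N FLOPs-per-token approximation for dense forward+backward passes; all concrete numbers below are illustrative assumptions, not DeepSeek-V3's measured figures:

```python
# Hedged sketch: MFU = achieved FLOPs/s divided by aggregate peak FLOPs/s.
# Achieved FLOPs per token is approximated as 6 * N (N = active parameter
# count), the standard dense-training estimate; illustrative numbers only.

def estimate_mfu(params: float, tokens_per_s: float, peak_flops: float) -> float:
    """Approximate Model FLOPs Utilization for dense training."""
    achieved = 6.0 * params * tokens_per_s  # forward + backward FLOPs/s
    return achieved / peak_flops

# Example (assumed values): ~37B activated parameters, 1e6 tokens/s across
# the cluster, 2048 GPUs at ~990 TFLOPS (dense BF16) each.
mfu = estimate_mfu(37e9, 1e6, 2048 * 990e12)
print(f"MFU ~ {mfu:.1%}")
```

Note that MoE models like DeepSeek-V3 count only activated parameters, and more careful estimates also include attention FLOPs, which the 6·N rule ignores.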
Alternatives and similar repositories for DPSKV3MFU
Users interested in DPSKV3MFU are comparing it to the repositories listed below.
- Multiple GEMM operators built with CUTLASS to support LLM inference. ☆19 · Updated Aug 3, 2025
- From-scratch C implementation of the multi-head latent attention (MLA) used in the DeepSeek-V3 technical report. ☆18 · Updated Jan 15, 2025
- ☆13 · Updated Jun 18, 2024
- ☆65 · Updated Apr 26, 2025
- A simple API for using CUPTI. ☆10 · Updated Aug 19, 2025
- Transformers components, but in Triton. ☆34 · Updated May 9, 2025
- Kernel library wheel for SGLang. ☆16 · Updated this week
- ☆23 · Updated Dec 18, 2024
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆19 · Updated Nov 18, 2024
- LoRAFusion: Efficient LoRA fine-tuning for LLMs. ☆25 · Updated Sep 23, 2025
- ☆16 · Updated Feb 24, 2026
- ☆32 · Updated Jul 2, 2025
- An auxiliary project analyzing the characteristics of the KV cache in DiT attention. ☆33 · Updated Nov 29, 2024
- Flash-Linear-Attention models beyond language. ☆21 · Updated Aug 28, 2025
- ☆137 · Updated Aug 18, 2025
- A parallel VAE that avoids OOM in high-resolution image generation. ☆89 · Updated Mar 12, 2026
- Flash-Muon: An efficient implementation of the Muon optimizer. ☆242 · Updated Jun 15, 2025
- ☆17 · Updated May 10, 2024
- [ICML 2024] Code for the paper "MoE-RBench: Towards Building Reliable Language Models with Sparse Mixture-of-Experts". ☆10 · Updated Jul 1, 2024
- Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Award, Honorable Mention]. ☆57 · Updated Mar 5, 2025
- ☆52 · Updated May 19, 2025
- Zero Bubble Pipeline Parallelism. ☆451 · Updated May 7, 2025
- Sequence-level 1F1B schedule for LLMs. ☆19 · Updated Jun 4, 2024
- Pipeline parallelism emulation and visualization. ☆80 · Updated Jan 8, 2026
- An efficient implementation of the NSA (Native Sparse Attention) kernel. ☆131 · Updated Jun 24, 2025
- Tiny-Megatron, a minimalistic re-implementation of the Megatron library. ☆23 · Updated Sep 1, 2025
- ☆11 · Updated Apr 3, 2023
- PaperHelper: A knowledge-based LLM QA paper-reading assistant with reliable references. ☆21 · Updated Jun 13, 2024
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆25 · Updated Mar 29, 2024
- NVSHMEM-Tutorial: Build a DeepEP-like GPU buffer. ☆170 · Updated Feb 11, 2026
- Fastest kernels, written from scratch. ☆559 · Updated Sep 18, 2025
- [WIP] Better (FP8) attention for Hopper. ☆32 · Updated Feb 24, 2025
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP. ☆148 · Updated Sep 12, 2025
- APEX+ is an LLM serving simulator. ☆44 · Updated Jun 16, 2025
- A prefill/decode-disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated Dec 25, 2025
- Accelerate LLM preference tuning via prefix sharing with a single line of code. ☆51 · Updated Jul 4, 2025
- An introduction to AWESOME_ENTROPY+LRM_PAPERS. ☆30 · Updated Dec 16, 2025
- A lightweight design for computation-communication overlap. ☆225 · Updated Jan 20, 2026
- Perplexity GPU Kernels. ☆566 · Updated Nov 7, 2025