Estimate MFU for DeepSeekV3
☆26 · Jan 5, 2025 · Updated last year
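MFU (Model FLOPs Utilization) is the ratio of the FLOP/s a run actually achieves to the hardware's peak FLOP/s. A minimal sketch of such an estimate (the ~37B active-parameter count comes from the DeepSeek-V3 technical report; the throughput and the ~989 TFLOP/s BF16 peak for an H800-class GPU are illustrative assumptions, not values taken from this repo):

```python
def estimate_mfu(tokens_per_sec: float,
                 active_params: float,
                 peak_flops_per_sec: float,
                 flops_per_token_factor: float = 6.0) -> float:
    """Model FLOPs Utilization: achieved FLOP/s divided by hardware peak FLOP/s.

    flops_per_token_factor is ~6 for training (forward + backward passes)
    and ~2 for inference (forward only); attention FLOPs are ignored in
    this simple parameter-count approximation.
    """
    achieved_flops_per_sec = flops_per_token_factor * active_params * tokens_per_sec
    return achieved_flops_per_sec / peak_flops_per_sec

# Illustrative numbers (assumptions, not measurements from this repo):
# DeepSeek-V3 activates ~37e9 parameters per token (MoE); assume
# 1,000 tokens/s per GPU and ~989e12 BF16 FLOP/s peak per GPU.
mfu = estimate_mfu(tokens_per_sec=1_000,
                   active_params=37e9,
                   peak_flops_per_sec=989e12)
print(f"{mfu:.1%}")  # → ~22.4% under these assumed numbers
```

For a MoE model like DeepSeek-V3, using the *active* rather than total parameter count is what keeps the estimate meaningful: only the routed experts' FLOPs are actually executed per token.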
Alternatives and similar repositories for DPSKV3MFU
Users interested in DPSKV3MFU are comparing it to the libraries listed below.
- Multiple GEMM operators built with CUTLASS to support LLM inference. ☆20 · Aug 3, 2025 · Updated 8 months ago
- A from-scratch C implementation of the multi-head latent attention (MLA) used in the DeepSeek-V3 technical report. ☆18 · Jan 15, 2025 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Updated this week
- ☆13 · Jun 18, 2024 · Updated last year
- Ongoing research training transformer models at scale ☆18 · Apr 9, 2026 · Updated 3 weeks ago
- ☆66 · Apr 26, 2025 · Updated last year
- A simple API to use CUPTI ☆10 · Aug 19, 2025 · Updated 8 months ago
- Transformers components but in Triton ☆34 · May 9, 2025 · Updated 11 months ago
- ☆22 · Dec 18, 2024 · Updated last year
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆19 · Nov 18, 2024 · Updated last year
- A Top-Down Profiler for GPU Applications ☆22 · Feb 29, 2024 · Updated 2 years ago
- ☆20 · Updated this week
- SGLang Kernel Wheel Index ☆22 · Apr 21, 2026 · Updated last week
- ☆32 · Jul 2, 2025 · Updated 9 months ago
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs ☆26 · Apr 8, 2026 · Updated 3 weeks ago
- Flash-Linear-Attention models beyond language ☆21 · Aug 28, 2025 · Updated 8 months ago
- The official repository of the Omni-MATH benchmark. ☆93 · Dec 22, 2024 · Updated last year
- An auxiliary project analyzing the characteristics of KV in DiT Attention. ☆34 · Nov 29, 2024 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆92 · Oct 30, 2024 · Updated last year
- A parallel VAE that avoids OOM in high-resolution image generation ☆91 · Apr 21, 2026 · Updated last week
- ☆17 · May 10, 2024 · Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆248 · Jun 15, 2025 · Updated 10 months ago
- [ICML 2024] Code for the paper "MoE-RBench: Towards Building Reliable Language Models with Sparse Mixture-of-Experts" ☆10 · Jul 1, 2024 · Updated last year
- Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Award, Honorable Mention] ☆56 · Mar 5, 2025 · Updated last year
- ☆52 · May 19, 2025 · Updated 11 months ago
- Zero Bubble Pipeline Parallelism ☆452 · May 7, 2025 · Updated 11 months ago
- Sequence-level 1F1B schedule for LLMs. ☆19 · Jun 4, 2024 · Updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆133 · Jun 24, 2025 · Updated 10 months ago
- Tiny-Megatron, a minimalistic re-implementation of the Megatron library ☆25 · Sep 1, 2025 · Updated 7 months ago
- ☆11 · Apr 3, 2023 · Updated 3 years ago
- PaperHelper: Knowledge-Based LLM QA Paper Reading Assistant with Reliable References ☆21 · Jun 13, 2024 · Updated last year
- Fastest kernels written from scratch ☆574 · Sep 18, 2025 · Updated 7 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆180 · Feb 11, 2026 · Updated 2 months ago
- [WIP] Better (FP8) attention for Hopper ☆33 · Feb 24, 2025 · Updated last year
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆151 · Sep 12, 2025 · Updated 7 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆125 · Dec 25, 2025 · Updated 4 months ago
- APEX+ is an LLM Serving Simulator ☆45 · Jun 16, 2025 · Updated 10 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Jul 4, 2025 · Updated 9 months ago
- ☆21 · Jun 4, 2024 · Updated last year