Estimate MFU for DeepSeekV3
☆26 · Jan 5, 2025 · Updated last year
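The repository's purpose, estimating MFU (Model FLOPs Utilization), boils down to comparing the FLOPs the model actually achieves against the hardware's theoretical peak. A minimal sketch of that calculation (not this repo's actual code; the per-GPU peak, cluster size, and throughput numbers below are illustrative assumptions):

```python
def estimate_mfu(tokens_per_sec, active_params, num_gpus, peak_flops_per_gpu):
    """Rough training MFU: achieved FLOPs / aggregate peak FLOPs.

    Uses the common ~6 * N FLOPs-per-token approximation for a
    transformer forward + backward pass; for an MoE model like
    DeepSeek-V3, N counts only the parameters activated per token.
    """
    achieved_flops = 6 * active_params * tokens_per_sec
    peak_flops = num_gpus * peak_flops_per_gpu
    return achieved_flops / peak_flops

# Illustrative assumptions: ~37B activated params per token,
# 2048 GPUs, ~989 TFLOP/s BF16 peak per GPU, 1M tokens/s throughput.
mfu = estimate_mfu(
    tokens_per_sec=1.0e6,
    active_params=37e9,
    num_gpus=2048,
    peak_flops_per_gpu=989e12,
)
print(f"MFU ~ {mfu:.2%}")
```

Note that the 6N approximation ignores attention FLOPs (which grow with sequence length), so a careful estimator like this repo's would add those terms separately.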
Alternatives and similar repositories for DPSKV3MFU
Users interested in DPSKV3MFU are comparing it to the repositories listed below.
- Multiple GEMM operators built with CUTLASS to support LLM inference. ☆20 · Aug 3, 2025 · Updated 8 months ago
- A from-scratch C implementation of the multi-head latent attention (MLA) used in the DeepSeek-V3 technical report. ☆18 · Jan 15, 2025 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling. ☆22 · Mar 25, 2026 · Updated 2 weeks ago
- ☆13 · Jun 18, 2024 · Updated last year
- Ongoing research training transformer models at scale. ☆18 · Updated this week
- ☆65 · Apr 26, 2025 · Updated 11 months ago
- A simple API for using CUPTI. ☆10 · Aug 19, 2025 · Updated 7 months ago
- Transformer components implemented in Triton. ☆34 · May 9, 2025 · Updated 11 months ago
- ☆22 · Dec 18, 2024 · Updated last year
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆19 · Nov 18, 2024 · Updated last year
- A top-down profiler for GPU applications. ☆22 · Feb 29, 2024 · Updated 2 years ago
- ☆17 · Feb 24, 2026 · Updated last month
- SGLang Kernel Wheel Index. ☆18 · Updated this week
- ☆32 · Jul 2, 2025 · Updated 9 months ago
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs. ☆26 · Sep 23, 2025 · Updated 6 months ago
- Flash-Linear-Attention models beyond language. ☆21 · Aug 28, 2025 · Updated 7 months ago
- The official repository of the Omni-MATH benchmark. ☆93 · Dec 22, 2024 · Updated last year
- An auxiliary project analyzing the characteristics of KV in DiT attention. ☆34 · Nov 29, 2024 · Updated last year
- Code for the NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations". ☆92 · Oct 30, 2024 · Updated last year
- ☆140 · Aug 18, 2025 · Updated 7 months ago
- A parallel VAE that avoids OOM during high-resolution image generation. ☆89 · Mar 12, 2026 · Updated 3 weeks ago
- Flash-Muon: An Efficient Implementation of the Muon Optimizer. ☆249 · Jun 15, 2025 · Updated 9 months ago
- [ICML 2024] Code for the paper "MoE-RBench: Towards Building Reliable Language Models with Sparse Mixture-of-Experts". ☆10 · Jul 1, 2024 · Updated last year
- Artifact for "Marconi: Prefix Caching for the Era of Hybrid LLMs" [MLSys '25 Outstanding Paper Award, Honorable Mention]. ☆56 · Mar 5, 2025 · Updated last year
- ☆52 · May 19, 2025 · Updated 10 months ago
- Zero Bubble Pipeline Parallelism. ☆452 · May 7, 2025 · Updated 11 months ago
- Sequence-level 1F1B schedule for LLMs. ☆19 · Jun 4, 2024 · Updated last year
- Pipeline Parallelism Emulation and Visualization. ☆81 · Jan 8, 2026 · Updated 3 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel. ☆132 · Jun 24, 2025 · Updated 9 months ago
- Tiny-Megatron, a minimalistic re-implementation of the Megatron library. ☆23 · Sep 1, 2025 · Updated 7 months ago
- PaperHelper: Knowledge-Based LLM QA Paper Reading Assistant with Reliable References. ☆21 · Jun 13, 2024 · Updated last year
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆25 · Mar 29, 2024 · Updated 2 years ago
- Fastest kernels written from scratch. ☆565 · Sep 18, 2025 · Updated 6 months ago
- [WIP] Better (FP8) attention for Hopper. ☆32 · Feb 24, 2025 · Updated last year
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP. ☆150 · Sep 12, 2025 · Updated 6 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆125 · Dec 25, 2025 · Updated 3 months ago
- APEX+ is an LLM serving simulator. ☆44 · Jun 16, 2025 · Updated 9 months ago
- ☆21 · Jun 4, 2024 · Updated last year
- An introduction to AWESOME_ENTROPY+LRM_PAPERS. ☆30 · Dec 16, 2025 · Updated 3 months ago