Analyze computation-communication overlap in DeepSeek V3/R1.
☆1,149 · Updated Mar 21, 2025
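The profiling data in this repository is shared as Chrome-trace style timelines for inspecting how compute kernels overlap with communication kernels. Below is a minimal sketch, assuming a PyTorch environment, of how a comparable trace could be captured; the model, file name, and training step are illustrative, not the repository's actual setup.

```python
# Minimal sketch (assumes PyTorch with CUDA; torch.distributed is optional here).
# Captures a Chrome-format trace that can be opened in chrome://tracing or Perfetto
# to inspect whether compute kernels overlap with communication kernels.
import torch
import torch.distributed as dist
from torch.profiler import profile, ProfilerActivity

def train_step(model, batch):
    # Illustrative step: forward/backward plus an asynchronous gradient all-reduce,
    # which is the kind of communication one would look for in the trace.
    loss = model(batch).sum()
    loss.backward()
    if dist.is_initialized():
        for p in model.parameters():
            if p.grad is not None:
                dist.all_reduce(p.grad, async_op=True)

model = torch.nn.Linear(4096, 4096).cuda()
batch = torch.randn(8, 4096, device="cuda")

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    train_step(model, batch)

prof.export_chrome_trace("overlap_trace.json")  # hypothetical output file name
```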
Alternatives and similar repositories for profile-data
Users interested in profile-data are comparing it to the libraries listed below.
- Expert Parallelism Load Balancer ☆1,357 · Updated Mar 24, 2025
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training. ☆2,934 · Updated Jan 14, 2026
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆6,268 · Updated Feb 27, 2026
- DeepEP: an efficient expert-parallel communication library ☆9,053 · Updated Feb 9, 2026
- FlashMLA: Efficient Multi-head Latent Attention Kernels ☆12,521 · Updated Feb 6, 2026
- A high-performance distributed file system designed to address the challenges of AI training and inference workloads. ☆9,770 · Updated Mar 9, 2026
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation ☆7,972 · Updated May 15, 2025
- A lightweight data processing framework built on DuckDB and 3FS. ☆4,938 · Updated Mar 5, 2025
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,273 · Updated Aug 28, 2025
- Distributed Compiler based on Triton for Parallel Systems ☆1,394 · Updated Mar 11, 2026
- ☆98 · Updated Apr 2, 2025
- FlashInfer: Kernel Library for LLM Serving ☆5,194 · Updated this week
- DeepSeek-V3/R1 inference performance simulator ☆189 · Updated Mar 27, 2025
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,953 · Updated this week
- Muon is Scalable for LLM Training ☆1,446 · Updated Aug 3, 2025
- Perplexity GPU Kernels ☆564 · Updated Nov 7, 2025
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,231 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆945 · Updated this week
- Zero Bubble Pipeline Parallelism ☆451 · Updated May 7, 2025
- A throughput-oriented high-performance serving framework for LLMs ☆949 · Updated Oct 29, 2025
- Materials for learning SGLang ☆775 · Updated Jan 5, 2026
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆24,829 · Updated this week
- A Datacenter Scale Distributed Inference Serving Framework ☆6,347 · Updated this week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆466 · Updated May 30, 2025
- xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism ☆2,572 · Updated this week
- CUDA Templates and Python DSLs for High-Performance Linear Algebra ☆9,442 · Updated this week
- A lightweight design for computation-communication overlap. ☆225 · Updated Jan 20, 2026
- Ongoing research training transformer models at scale ☆15,744 · Updated this week
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,403 · Updated this week
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,083 · Updated Apr 3, 2025
- DeeperGEMM: crazy optimized version ☆75 · Updated May 5, 2025
- Ring attention implementation with flash attention ☆996 · Updated Sep 10, 2025
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆676 · Updated Mar 16, 2026
- Fast and memory-efficient exact attention ☆22,832 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs). ☆785 · Updated Apr 6, 2025
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,120 · Updated this week
- Accelerating MoE with IO and Tile-aware Optimizations ☆606 · Updated Feb 27, 2026
- Efficient and easy multi-instance LLM serving ☆532 · Updated Mar 12, 2026
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batchsizes of 16-32 tokens. ☆1,041 · Updated Sep 4, 2024