wassemgtk / MegaScale-Infer-Prototyp
Prototype of MegaScale-Infer: Serving Mixture-of-Experts at Scale with Disaggregated Expert Parallelism
☆26 · Updated 10 months ago
Alternatives and similar repositories for MegaScale-Infer-Prototyp
Users interested in MegaScale-Infer-Prototyp are comparing it to the libraries listed below.
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆278 · Updated last week
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆190 · Updated this week
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆141 · Updated last year
- Nex Venus Communication Library ☆72 · Updated 2 months ago
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding ☆87 · Updated 2 months ago
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆71 · Updated 4 months ago
- Accepted to MLSys 2026 ☆70 · Updated last week
- A lightweight design for computation-communication overlap. ☆219 · Updated 2 weeks ago
- Building the Virtuous Cycle for AI-driven LLM Systems ☆151 · Updated this week
- LLM Serving Performance Evaluation Harness ☆83 · Updated 11 months ago
- [NeurIPS '25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning ☆87 · Updated 2 months ago
- [OSDI '24] Serving LLM-based Applications Efficiently with Semantic Variable ☆209 · Updated last year
- [DAC '25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆99 · Updated last month
- Stateful LLM Serving ☆95 · Updated 10 months ago
- Keyformer reduces the KV cache by identifying key tokens, with no fine-tuning required ☆59 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training ☆124 · Updated 4 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆161 · Updated 4 months ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆69 · Updated last year
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆34 · Updated last year
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆66 · Updated last month
- Tile-based language built for AI computation across all scales ☆119 · Updated last week
- High performance Transformer implementation in C++. ☆150 · Updated last year
- Code for paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆160 · Updated 3 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆176 · Updated last year
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆233 · Updated 2 years ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆312 · Updated 7 months ago
- High-performance distributed data shuffling (all-to-all) library for MoE training and inference ☆111 · Updated last month
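
Most of the repositories above share one core mechanic: route each token to a small top-k subset of expert FFNs, dispatch the grouped tokens to wherever those experts live (an all-to-all exchange in distributed settings), and combine the gate-weighted outputs. Below is a minimal single-process PyTorch sketch of that dispatch/combine loop. It is an illustration only, not the implementation of MegaScale-Infer or of any repository listed here; the class name, dimensions, and expert count are made up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Illustrative top-k MoE layer: gate -> dispatch -> expert FFN -> combine."""
    def __init__(self, d_model=64, n_experts=4, top_k=2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # token router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                                 # x: [tokens, d_model]
        weights, idx = self.gate(x).topk(self.top_k, -1)  # pick k experts per token
        weights = F.softmax(weights, dim=-1)              # normalize the k gate scores
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            # "Dispatch": gather the tokens routed to expert e ...
            tok, slot = (idx == e).nonzero(as_tuple=True)
            if tok.numel():
                # ... "combine": scale each expert output by its gate weight.
                out[tok] += weights[tok, slot, None] * expert(x[tok])
        return out

moe = TinyMoE()
print(moe(torch.randn(8, 64)).shape)  # -> torch.Size([8, 64])
```

In disaggregated expert parallelism, each iteration of the per-expert loop would run on a different GPU (or a dedicated expert node), and the gather/scatter around it is replaced by all-to-all communication between attention and expert workers.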