wassemgtk / MegaScale-Infer-Prototyp
Prototype of MegaScale-Infer: Serving Mixture-of-Experts at Scale with Disaggregated Expert Parallelism
☆26 · Updated 10 months ago
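The prototype targets disaggregated expert parallelism, where attention layers and FFN experts are served by separate node groups and token activations are shipped between them. Below is a minimal, single-process sketch of that idea for orientation only; the names (`AttentionNode`, `ExpertNode`, `dispatch_to_experts`) are illustrative assumptions and are not taken from the MegaScale-Infer-Prototyp code base.

```python
# Hypothetical single-process sketch of disaggregated expert parallelism:
# attention layers and FFN experts live on separate "nodes" (plain objects here),
# and a dispatcher ships token activations between them. All names are
# illustrative and not from the actual prototype.
import torch
import torch.nn as nn


class AttentionNode(nn.Module):
    """Stands in for a GPU group that runs only the attention side of a layer."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(x, x, x)
        return out


class ExpertNode(nn.Module):
    """Stands in for a GPU group that hosts a single FFN expert."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.ffn(x)


def dispatch_to_experts(hidden, router, experts):
    """Top-1 routing: send each token's activation to its chosen expert node and
    gather results back, mimicking attention-to-expert traffic in a
    disaggregated deployment."""
    choice = router(hidden).argmax(dim=-1)        # expert id per token
    out = torch.zeros_like(hidden)
    for eid, expert in enumerate(experts):
        mask = choice == eid
        if mask.any():
            out[mask] = expert(hidden[mask])      # stand-in for a remote expert call
    return out


if __name__ == "__main__":
    d_model, n_experts = 64, 4
    attn_node = AttentionNode(d_model, n_heads=4)
    expert_nodes = [ExpertNode(d_model, 4 * d_model) for _ in range(n_experts)]
    router = nn.Linear(d_model, n_experts)

    with torch.no_grad():                         # serving-style forward pass
        tokens = torch.randn(2, 8, d_model)       # [batch, seq, d_model]
        hidden = attn_node(tokens).reshape(-1, d_model)
        output = dispatch_to_experts(hidden, router, expert_nodes)
    print(output.shape)                           # torch.Size([16, 64])
```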
Alternatives and similar repositories for MegaScale-Infer-Prototyp
Users interested in MegaScale-Infer-Prototyp are comparing it to the libraries listed below.
- ☆84 · Updated 3 months ago
- Accepted to MLSys 2026 ☆70 · Updated last week
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆279 · Updated this week
- ☆150 · Updated last year
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆71 · Updated 4 months ago
- ☆131 · Updated last year
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆141 · Updated last year
- A lightweight design for computation-communication overlap. ☆219 · Updated 2 weeks ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆135 · Updated last year
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆209 · Updated last year
- ☆342 · Updated last week
- nnScaler: Compiling DNN models for Parallel Training ☆124 · Updated 4 months ago
- Stateful LLM Serving ☆95 · Updated 10 months ago
- LLM Serving Performance Evaluation Harness ☆83 · Updated 11 months ago
- ☆43 · Updated last year
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆147 · Updated last month
- Keyformer proposes KV cache reduction through key-token identification, without the need for fine-tuning. ☆58 · Updated last year
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆69 · Updated last year
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆335 · Updated last year
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆34 · Updated last year
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding ☆87 · Updated 2 months ago
- ☆48 · Updated last year
- High performance Transformer implementation in C++. ☆150 · Updated last year
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆190 · Updated last week
- ☆89 · Updated 3 years ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆176 · Updated last year
- A resilient distributed training framework ☆96 · Updated last year
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆233 · Updated 2 years ago
- Nex Venus Communication Library ☆72 · Updated 2 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆457 · Updated 8 months ago