zartbot / shallowsim
DeepSeek-V3/R1 inference performance simulator
☆115 · Updated last month
Alternatives and similar repositories for shallowsim:
Users interested in shallowsim are comparing it to the repositories listed below.
- ☆60 · Updated last month
- NCCL Profiling Kit ☆133 · Updated 10 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆78 · Updated 5 months ago
- Microsoft Collective Communication Library ☆65 · Updated 5 months ago
- A lightweight design for computation-communication overlap. ☆35 · Updated last week
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆64 · Updated this week
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆60 · Updated 11 months ago
- Ultra | Ultimate | Unified CCL ☆65 · Updated 2 months ago
- Stateful LLM Serving ☆65 · Updated last month
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆99 · Updated last year
- ☆79 · Updated 2 years ago
- LLM serving cluster simulator ☆99 · Updated last year
- Thunder Research Group's Collective Communication Library ☆36 · Updated last year
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) ☆81 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training ☆109 · Updated this week
- ☆59 · Updated 10 months ago
- High performance Transformer implementation in C++. ☆119 · Updated 3 months ago
- Summary of some awesome work for optimizing LLM inference ☆69 · Updated 3 weeks ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆345 · Updated this week
- Synthesizer for optimal collective communication algorithms ☆106 · Updated last year
- PyTorch distributed training acceleration framework ☆48 · Updated 2 months ago
- ☆23 · Updated 9 months ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆144 · Updated 2 years ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆58 · Updated last year
- LLM Inference analyzer for different hardware platforms ☆64 · Updated this week
- ☆95 · Updated 5 months ago
- ☆36 · Updated 4 months ago
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆34 · Updated this week
- [OSDI '24] Serving LLM-based Applications Efficiently with Semantic Variable ☆155 · Updated 7 months ago
- ☆104 · Updated last month
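Several entries above (shallowsim itself, the roofline comparison tool, and the LLM inference analyzer) estimate inference latency from hardware peak numbers. A minimal roofline-style sketch of that idea is below; the function name and all numbers are illustrative assumptions, not taken from any listed repository:

```python
def roofline_time(flops: float, bytes_moved: float,
                  peak_flops: float, peak_bw: float) -> float:
    """Roofline estimate: kernel time is the max of compute time
    (flops / peak_flops) and memory time (bytes_moved / peak_bw)."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Rough decode-step example (assumed numbers): a 7B-parameter model in FP16
# streams ~14 GB of weights and does ~14 GFLOP per generated token.
weights_bytes = 14e9
step_flops = 14e9

# Hypothetical accelerator: 1e15 FLOP/s peak compute, 3e12 B/s HBM bandwidth.
t = roofline_time(step_flops, weights_bytes, peak_flops=1e15, peak_bw=3e12)
print(f"decode step is memory-bound: ~{t * 1e3:.2f} ms/token")
```

With these assumptions the memory term dominates (14 GB / 3 TB/s ≈ 4.7 ms vs 14 GFLOP / 1 PFLOP/s ≈ 0.014 ms), which is why batch-of-one decode is typically bandwidth-bound; the simulators listed here refine this estimate with parallelism, overlap, and communication models.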