alibaba / llm-scheduling-artifact
Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving"
☆57 · Updated 5 months ago
Related projects
Alternatives and complementary repositories for llm-scheduling-artifact
- High-performance Transformer implementation in C++ ☆82 · Updated 2 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆57 · Updated 6 months ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) ☆78 · Updated last year
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems ☆132 · Updated last month
- An interference-aware scheduler for fine-grained GPU sharing ☆110 · Updated 6 months ago
- Compiler for Dynamic Neural Networks ☆43 · Updated last year
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆101 · Updated 8 months ago
- Stateful LLM Serving ☆38 · Updated 3 months ago
- Efficient and easy multi-instance LLM serving ☆213 · Updated this week
- nnScaler: Compiling DNN models for Parallel Training ☆74 · Updated 3 weeks ago
- Artifacts for our ASPLOS '23 paper ElasticFlow ☆52 · Updated 6 months ago
- LLM serving cluster simulator ☆81 · Updated 6 months ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆54 · Updated 3 months ago
- A resilient distributed training framework ☆85 · Updated 7 months ago
- An experimental parallel training platform ☆52 · Updated 7 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆75 · Updated this week
- Personal paper reading notes (including cloud computing, resource management, systems, machine learning, deep learning, and o… ☆50 · Updated 2 months ago
- [OSDI '24] Serving LLM-based Applications Efficiently with Semantic Variable ☆114 · Updated 2 months ago
- Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction | A tiny model can tell you the verbosity of an LLM (… ☆22 · Updated 5 months ago
- A low-latency & high-throughput serving engine for LLMs ☆245 · Updated 2 months ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling ☆39 · Updated 2 years ago
- Official repository for the paper "DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines" ☆14 · Updated 11 months ago