LLMServe / dLoRA-artifact
Related projects
Alternatives and complementary repositories for dLoRA-artifact
- Compiler for Dynamic Neural Networks
- A ChatGPT (GPT-3.5) & GPT-4 workload trace to optimize LLM serving systems
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU scheduling
- Artifacts for our ASPLOS '23 paper ElasticFlow
- LLM serving cluster simulator
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23)
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving"
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling
- Dorylus: Affordable, Scalable, and Accurate GNN Training
- Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction | A tiny model can tell you the verbosity of an LLM
- An experimental parallel training platform
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines
- An interference-aware scheduler for fine-grained GPU sharing
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI '24)
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling
- Artifacts for our SIGCOMM '22 paper Muri