vuhpdc / jellyfish
Source code for Jellyfish, a soft real-time inference serving system
☆14 · Updated 3 years ago
Alternatives and similar repositories for jellyfish
Users interested in jellyfish are comparing it to the libraries listed below.
- ☆213 · Updated last year
- A deep learning-driven scheduler for elastic training in deep learning clusters ☆31 · Updated 4 years ago
- Source code and datasets for Ekya, a system for continuous learning on the edge. ☆112 · Updated 3 years ago
- ☆22 · Updated last year
- HeliosArtifact ☆22 · Updated 3 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆35 · Updated 3 years ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling ☆12 · Updated last year
- ☆15 · Updated last year
- ☆57 · Updated 4 years ago
- Lucid: A Non-Intrusive, Scalable and Interpretable Scheduler for Deep Learning Training Jobs ☆58 · Updated 2 years ago
- A list of awesome edge-AI inference-related papers. ☆98 · Updated 2 years ago
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] ☆46 · Updated 3 years ago
- ☆52 · Updated 3 years ago
- ☆44 · Updated last year
- ☆38 · Updated 6 months ago
- iGniter, an interference-aware GPU resource provisioning framework for achieving predictable performance of DNN inference in the cloud. ☆39 · Updated last year
- Source code of IPA, https://escholarship.org/uc/item/2p0805dq ☆12 · Updated last year
- Metis: Learning to Schedule Long-Running Applications in Shared Container Clusters at Scale ☆19 · Updated 5 years ago
- ☆23 · Updated 4 years ago
- ☆11 · Updated 5 years ago
- Primo: Practical Learning-Augmented Systems with Interpretable Models ☆19 · Updated 2 years ago
- ☆20 · Updated 3 years ago
- ☆24 · Updated 2 years ago
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆55 · Updated last year
- Artifacts for our SIGCOMM'22 paper Muri ☆43 · Updated 2 years ago
- ☆102 · Updated last year
- A Deep Learning Cluster Scheduler ☆37 · Updated 5 years ago
- Official Repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and …" ☆36 · Updated 4 months ago
- Model-less Inference Serving ☆92 · Updated 2 years ago
- A Portable C Library for Distributed CNN Inference on IoT Edge Clusters ☆88 · Updated 5 years ago