eth-easl / cachew
ML Input Data Processing as a Service. This repository contains the source code for Cachew (built on top of TensorFlow).
☆39 · Updated 10 months ago
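Cachew builds on TensorFlow's tf.data service, which disaggregates input-data preprocessing from the training workers. As a rough orientation only, the sketch below shows the stock tf.data service offloading pattern that Cachew extends; it is not Cachew's own API, and the in-process dispatcher/worker and the toy pipeline are illustrative assumptions.

```python
import tensorflow as tf

# Minimal sketch of the tf.data service interface that Cachew builds on.
# This is stock TensorFlow, not Cachew's API; Cachew's caching and scaling
# extensions are not shown. The in-process dispatcher and worker exist only
# so the example runs end to end on one machine.
dispatcher = tf.data.experimental.service.DispatchServer()
worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address=dispatcher.target.split("://")[1]))

# A regular input pipeline, with its preprocessing offloaded to the service.
dataset = (
    tf.data.Dataset.range(1024)
    .map(lambda x: x * 2, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(32)
    .apply(tf.data.experimental.service.distribute(
        processing_mode="distributed_epoch",  # each element processed once
        service=dispatcher.target)))

for batch in dataset.take(2):
    print(batch.numpy()[:4])
```

In a real deployment the dispatcher and workers would run on separate preprocessing nodes rather than in-process.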
Alternatives and similar repositories for cachew
Users interested in cachew are comparing it to the libraries listed below.
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training · ☆35 · Updated 2 years ago
- Stateful LLM Serving · ☆79 · Updated 4 months ago
- A resilient distributed training framework · ☆95 · Updated last year
- Lightning In-Memory Object Store · ☆47 · Updated 3 years ago
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] · ☆25 · Updated 8 months ago
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS '25] · ☆28 · Updated 2 months ago
- A universal workflow system for exactly-once DAGs · ☆23 · Updated 2 years ago
- Vector search with bounded performance · ☆36 · Updated last year
- SpotServe: Serving Generative Large Language Models on Preemptible Instances · ☆125 · Updated last year
- Dorylus: Affordable, Scalable, and Accurate GNN Training · ☆76 · Updated 4 years ago
- An experimental parallel training platform · ☆54 · Updated last year
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances · ☆50 · Updated 2 years ago
- Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction | A tiny BERT model can tell you the verbosity of an … · ☆38 · Updated last year
- Microsoft Collective Communication Library · ☆63 · Updated 8 months ago
- FTPipe and related pipeline model parallelism research · ☆41 · Updated 2 years ago
- SFS: A Smart OS Scheduler for Serverless Function Workloads (SC '22) · ☆13 · Updated 2 years ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) · ☆83 · Updated 2 years ago
- Model-less Inference Serving · ☆90 · Updated last year
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces · ☆53 · Updated 11 months ago
- Nightcore: Efficient and Scalable Serverless Computing for Latency-Sensitive, Interactive Microservices [ASPLOS '21] · ☆105 · Updated 3 years ago
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] · ☆44 · Updated 2 years ago
- GeminiFS: A Companion File System for GPUs · ☆37 · Updated 5 months ago