eis-lab / sage
Experimental deep learning framework written in Rust
☆14 · Updated 2 years ago
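For context on what a minimal deep learning framework in Rust involves, here is a short, self-contained sketch of the kind of primitive such a framework is built around: a flat-buffer tensor, a naive matrix multiply, and a ReLU forward pass. All names and the layout are hypothetical illustrations, not sage's actual API.

```rust
// Hypothetical sketch of a deep-learning building block in Rust.
// This is NOT sage's API; it only illustrates the general idea of a
// row-major tensor plus a dense-layer forward pass (matmul + ReLU).

/// Row-major 2-D tensor backed by a flat Vec<f32>.
struct Tensor2 {
    rows: usize,
    cols: usize,
    data: Vec<f32>,
}

impl Tensor2 {
    fn zeros(rows: usize, cols: usize) -> Self {
        Self { rows, cols, data: vec![0.0; rows * cols] }
    }

    fn get(&self, r: usize, c: usize) -> f32 {
        self.data[r * self.cols + c]
    }

    /// Naive matrix multiply: (m x k) * (k x n) -> (m x n).
    fn matmul(&self, other: &Tensor2) -> Tensor2 {
        assert_eq!(self.cols, other.rows, "inner dimensions must match");
        let mut out = Tensor2::zeros(self.rows, other.cols);
        for i in 0..self.rows {
            for j in 0..other.cols {
                let mut acc = 0.0;
                for k in 0..self.cols {
                    acc += self.get(i, k) * other.get(k, j);
                }
                out.data[i * out.cols + j] = acc;
            }
        }
        out
    }

    /// Elementwise ReLU, applied in place.
    fn relu_inplace(&mut self) {
        for v in &mut self.data {
            *v = v.max(0.0);
        }
    }
}

fn main() {
    // A single dense layer: 1x3 input times 3x2 weights, then ReLU.
    let x = Tensor2 { rows: 1, cols: 3, data: vec![1.0, -2.0, 0.5] };
    let w = Tensor2 { rows: 3, cols: 2, data: vec![0.2, -0.1, -0.4, 0.3, 0.5, 0.7] };
    let mut y = x.matmul(&w);
    y.relu_inplace();
    println!("output: {:?}", y.data); // [1.25, 0.0]
}
```

A real framework layers automatic differentiation, optimizers, and accelerated backends on top of primitives like these; the sketch above only shows the forward-pass core.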
Alternatives and similar repositories for sage:
Users interested in sage are comparing it to the libraries listed below.
- MobiSys#114 ☆21 · Updated last year
- ☆24 · Updated last year
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation ☆24 · Updated 3 years ago
- [ACM EuroSys '23] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆56 · Updated last year
- Source code for the paper: "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 4 years ago
- ☆19 · Updated 2 years ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆103 · Updated last month
- SOTA Learning-augmented Systems ☆36 · Updated 2 years ago
- A list of awesome edge AI inference-related papers. ☆95 · Updated last year
- ☆64 · Updated 3 weeks ago
- Multi-Instance-GPU profiling tool ☆57 · Updated last year
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆38 · Updated 2 years ago
- ☆53 · Updated last year
- ☆14 · Updated 3 years ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆52 · Updated 7 months ago
- The open-source project for "Mandheling: Mixed-Precision On-Device DNN Training with DSP Offloading" [MobiCom'22] ☆18 · Updated 2 years ago
- A GPU-accelerated DNN inference serving system that supports instant kernel preemption and biased concurrent execution in GPU scheduling. ☆42 · Updated 2 years ago
- LLM serving cluster simulator ☆96 · Updated 11 months ago
- ☆14 · Updated 8 months ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆101 · Updated 3 months ago
- ☆37 · Updated 3 years ago
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" [IPDPS'24] ☆19 · Updated last year
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆49 · Updated 10 months ago
- A version of XRBench-MAESTRO used for the MLSys 2023 publication ☆23 · Updated last year
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆23 · Updated 4 months ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling ☆10 · Updated last year
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆41 · Updated last year
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆56 · Updated 3 years ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆24 · Updated 11 months ago