eis-lab / sage
Experimental deep learning framework written in Rust
☆14 · Updated 2 years ago
Alternatives and similar repositories for sage:
Users interested in sage are comparing it to the libraries listed below.
- MobiSys#114 ☆21 · Updated last year
- ☆24 · Updated last year
- Source code for the paper: "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 4 years ago
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation ☆25 · Updated 3 years ago
- SOTA Learning-augmented Systems ☆36 · Updated 2 years ago
- [ACM EuroSys '23] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆56 · Updated last year
- Multi-Instance-GPU profiling tool ☆57 · Updated 2 years ago
- ☆66 · Updated last month
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- ☆19 · Updated 2 years ago
- A list of awesome edge-AI inference-related papers ☆96 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆106 · Updated 3 weeks ago
- ☆14 · Updated 3 years ago
- The open-source project for "Mandheling: Mixed-Precision On-Device DNN Training with DSP Offloading" [MobiCom '22] ☆19 · Updated 2 years ago
- ☆99 · Updated last year
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters ☆38 · Updated 2 years ago
- Official PyTorch Implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight) ☆63 · Updated 9 months ago
- Official Repo for "LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization" ☆29 · Updated last year
- [MobiSys 2020] Fast and Scalable In-memory Deep Multitask Learning via Neural Weight Virtualization ☆16 · Updated 4 years ago
- LLM serving cluster simulator ☆99 · Updated last year
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆44 · Updated last year
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆112 · Updated 2 months ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models [ICML 2021] ☆56 · Updated 3 years ago
- DISB: a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces ☆52 · Updated 8 months ago
- [DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation ☆11 · Updated last year
- Official repository for "[IPDPS '24] QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" ☆19 · Updated last year
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" ☆85 · Updated last year
- ☆53 · Updated 4 years ago
- one-shot-tuner ☆8 · Updated 2 years ago