eis-lab / sage
Experimental deep learning framework written in Rust
☆14 · Updated 2 years ago
Alternatives and similar repositories for sage:
Users interested in sage are comparing it to the libraries listed below.
- ☆24 · Updated last year
- Source code for the paper: "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆21 · Updated 4 years ago
- MobiSys#114 ☆21 · Updated last year
- ☆50 · Updated 10 months ago
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation ☆24 · Updated 3 years ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆95 · Updated last month
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆83 · Updated last month
- SOTA Learning-augmented Systems ☆34 · Updated 2 years ago
- Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access (ACM EuroSys '23) ☆55 · Updated 10 months ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- Official PyTorch Implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight) ☆60 · Updated 6 months ago
- Official Repo for "LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization" ☆28 · Updated 11 months ago
- The open-source project for "Mandheling: Mixed-Precision On-Device DNN Training with DSP Offloading" [MobiCom'22] ☆18 · Updated 2 years ago
- [DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation ☆11 · Updated last year
- A curated list of early exiting (LLM, CV, NLP, etc.) ☆41 · Updated 5 months ago
- Multi-Instance-GPU profiling tool ☆56 · Updated last year
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆40 · Updated 11 months ago
- ☆18 · Updated 11 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆37 · Updated 2 years ago
- A version of XRBench-MAESTRO used for the MLSys 2023 publication ☆23 · Updated last year
- ☆27 · Updated 10 months ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆55 · Updated 3 years ago
- ☆18 · Updated 2 years ago
- ☆99 · Updated last year
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" ☆81 · Updated last year
- [ASPLOS'23] Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression ☆6 · Updated 6 months ago
- ☆103 · Updated last year
- one-shot-tuner ☆8 · Updated 2 years ago
- A list of awesome edge-AI inference papers ☆92 · Updated last year
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters ☆19 · Updated 9 months ago