eis-lab / sage
Experimental deep learning framework written in Rust
☆14 · Updated 2 years ago
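For context, here is a minimal, hypothetical sketch of the kind of tensor API an experimental Rust deep-learning framework like sage might expose. The type and method names below are illustrative assumptions, not sage's actual API.

```rust
// Hypothetical sketch of a minimal tensor type an experimental Rust
// deep-learning framework might expose. Names are illustrative and
// are NOT sage's actual API.
#[derive(Debug, Clone)]
struct Tensor {
    data: Vec<f32>,
    shape: Vec<usize>,
}

impl Tensor {
    fn new(data: Vec<f32>, shape: Vec<usize>) -> Self {
        // The flat buffer must match the product of the shape dims.
        assert_eq!(data.len(), shape.iter().product::<usize>());
        Tensor { data, shape }
    }

    // Element-wise addition; shapes must match exactly (no broadcasting).
    fn add(&self, other: &Tensor) -> Tensor {
        assert_eq!(self.shape, other.shape);
        let data = self
            .data
            .iter()
            .zip(&other.data)
            .map(|(a, b)| a + b)
            .collect();
        Tensor::new(data, self.shape.clone())
    }

    // ReLU activation, applied element-wise.
    fn relu(&self) -> Tensor {
        let data = self.data.iter().map(|x| x.max(0.0)).collect();
        Tensor::new(data, self.shape.clone())
    }
}

fn main() {
    let x = Tensor::new(vec![1.0, -2.0, 3.0, -4.0], vec![2, 2]);
    let y = Tensor::new(vec![0.5; 4], vec![2, 2]);
    let out = x.add(&y).relu();
    println!("{:?}", out); // data: [1.5, 0.0, 3.5, 0.0]
}
```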
Alternatives and similar repositories for sage:
Users interested in sage are comparing it to the libraries listed below.
- MobiSys#114 ☆21 · Updated last year
- Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access (ACM EuroSys '23) ☆55 · Updated 10 months ago
- ☆24 · Updated last year
- ☆18 · Updated 2 years ago
- ☆13 · Updated 3 years ago
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆20 · Updated 4 years ago
- SOTA Learning-augmented Systems ☆34 · Updated 2 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆80 · Updated 3 weeks ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆18 · Updated 8 months ago
- A list of awesome edge-AI inference-related papers. ☆91 · Updated last year
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆37 · Updated 2 years ago
- ☆48 · Updated 9 months ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆93 · Updated last month
- [DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation ☆11 · Updated last year
- Code for the paper "NeuralPower: Predict and Deploy Energy-Efficient Convolutional Neural Networks" ☆21 · Updated 5 years ago
- Multi-Instance-GPU profiling tool ☆56 · Updated last year
- one-shot-tuner ☆8 · Updated 2 years ago
- zTT: Learning-based DVFS with Zero Thermal Throttling for Mobile Devices [MobiSys'21] - Artifact Evaluation ☆22 · Updated 3 years ago
- The open-source project for "Mandheling: Mixed-Precision On-Device DNN Training with DSP Offloading" [MobiCom 2022] ☆18 · Updated 2 years ago
- A version of XRBench-MAESTRO used for the MLSys 2023 publication ☆23 · Updated last year
- A curated list of early exiting (LLM, CV, NLP, etc.) ☆38 · Updated 5 months ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c…); a sketch of the underlying top-k idea follows this list. ☆24 · Updated 2 years ago
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆39 · Updated 10 months ago
- ☆37 · Updated 3 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆55 · Updated 3 years ago
- Code for the paper "ElasticTrainer: Speeding Up On-Device Training with Runtime Elastic Tensor Selection" (MobiSys'23) ☆12 · Updated last year
- FastFlow is a system that automatically detects CPU bottlenecks in deep learning training pipelines and resolves the bottlenecks with dat… ☆26 · Updated last year
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS'24). ☆19 · Updated 11 months ago
- Official PyTorch Implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight… ☆60 · Updated 5 months ago
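As referenced in the Ok-Topk entry above, here is a hedged sketch of top-k gradient sparsification, the core idea behind sparse-gradient distributed training schemes of that kind: keep only the k largest-magnitude gradient entries as index/value pairs and communicate those. The function name and signature are illustrative assumptions, not Ok-Topk's actual implementation.

```rust
// Sketch of top-k gradient sparsification (the idea underlying schemes
// like Ok-Topk): retain only the k largest-magnitude gradient entries.
// This is a hypothetical helper, not Ok-Topk's actual API.
fn topk_sparsify(grad: &[f32], k: usize) -> Vec<(usize, f32)> {
    // Pair each gradient value with its index.
    let mut indexed: Vec<(usize, f32)> =
        grad.iter().copied().enumerate().collect();
    // Sort by descending magnitude and keep only the first k entries;
    // the rest are implicitly treated as zero by the receiver.
    indexed.sort_by(|a, b| b.1.abs().partial_cmp(&a.1.abs()).unwrap());
    indexed.truncate(k);
    indexed
}

fn main() {
    let grad = [0.1, -3.0, 0.02, 2.5, -0.4];
    // Keep the 2 largest-magnitude entries: indices 1 and 3.
    let sparse = topk_sparsify(&grad, 2);
    println!("{:?}", sparse); // [(1, -3.0), (3, 2.5)]
}
```

In a real allreduce, each worker would exchange these sparse index/value pairs instead of the dense gradient, which is where the communication savings come from.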