eis-lab / sage
Experimental deep learning framework written in Rust
☆15 · Updated 2 years ago
Alternatives and similar repositories for sage
Users interested in sage are comparing it to the libraries listed below.
- A list of awesome edge AI inference related papers. ☆98 · Updated last year
- Source code for the paper: "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems" ☆22 · Updated 4 years ago
- ☆21 · Updated 3 years ago
- ☆25 · Updated 2 years ago
- MobiSys#114 ☆22 · Updated 2 years ago
- ☆73 · Updated 4 months ago
- Multi-Instance-GPU profiling tool ☆60 · Updated 2 years ago
- [DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation ☆11 · Updated 2 years ago
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆57 · Updated 2 months ago
- SOTA Learning-augmented Systems ☆37 · Updated 3 years ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆115 · Updated 3 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆41 · Updated 2 years ago
- Implementation for the paper: AdaTune: Adaptive Tensor Program Compilation Made Efficient (NeurIPS 2020). ☆14 · Updated 4 years ago
- ☆92 · Updated 2 years ago
- Code for the paper "ElasticTrainer: Speeding Up On-Device Training with Runtime Elastic Tensor Selection" (MobiSys'23) ☆13 · Updated last year
- Study Group of Deep Learning Compiler ☆164 · Updated 2 years ago
- Official repo for "SplitQuant / LLM-PQ: Resource-Efficient LLM Offline Serving on Heterogeneous GPUs via Phase-Aware Model Partition and … ☆34 · Updated last month
- Official PyTorch implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight… ☆63 · Updated last year
- ☆75 · Updated last year
- DietCode Code Release ☆65 · Updated 3 years ago
- DISB is a new DNN inference serving benchmark with diverse workloads and models, as well as real-world traces. ☆54 · Updated last year
- ☆100 · Updated last year
- ☆152 · Updated last year
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆59 · Updated 6 months ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆56 · Updated 4 years ago
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation ☆27 · Updated 5 years ago
- A version of XRBench-MAESTRO used for the MLSys 2023 publication ☆25 · Updated 2 years ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆143 · Updated 2 months ago
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] ☆29 · Updated 3 years ago
- An external memory allocator example for PyTorch. ☆16 · Updated 2 months ago