pittisl / ElasticTrainer
Code for the paper "ElasticTrainer: Speeding Up On-Device Training with Runtime Elastic Tensor Selection" (MobiSys'23)
☆13 · Updated last year
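For orientation only, the sketch below illustrates the general idea behind the paper's title, training only a selected subset of parameter tensors while the rest stay frozen. It is not taken from this repository and does not reproduce the ElasticTrainer selection algorithm; the PyTorch framing, the gradient-magnitude importance proxy, and all names (`select_trainable`, `keep_ratio`) are assumptions made purely for illustration.

```python
# Hypothetical sketch of "train only a selected subset of tensors".
# NOT the ElasticTrainer algorithm; the importance heuristic is invented.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def select_trainable(model, keep_ratio=0.5):
    """Freeze every parameter tensor except the top-k ranked by a crude
    importance proxy (mean |grad| from the last backward pass)."""
    params = [p for p in model.parameters() if p.grad is not None]
    scores = [p.grad.abs().mean().item() for p in params]
    k = max(1, int(len(params) * keep_ratio))
    top = set(sorted(range(len(params)), key=lambda i: -scores[i])[:k])
    for i, p in enumerate(params):
        p.requires_grad_(i in top)

# One warm-up step so gradients exist, then pick the tensor subset to train.
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss_fn(model(x), y).backward()
select_trainable(model, keep_ratio=0.5)

for step in range(20):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()  # frozen tensors receive no new gradients
    opt.step()       # parameters frozen above are left unchanged
```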
Alternatives and similar repositories for ElasticTrainer
Users interested in ElasticTrainer are comparing it to the libraries listed below
- Deploying Transformer models in CV to mobile devices. ☆18 · Updated 3 years ago
- Official implementation of the ECCV 2022 paper LIMPQ - "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" ☆56 · Updated 2 years ago
- A list of awesome edge-AI inference-related papers. ☆97 · Updated last year
- Experimental deep learning framework written in Rust ☆15 · Updated 2 years ago
- KVTuner: Sensitivity-Aware Layer-wise Mixed Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference ☆18 · Updated 2 months ago
- [ICML 2023] This project is the official implementation of our accepted ICML 2023 paper BiBench: Benchmarking and Analyzing Network Binar… ☆56 · Updated last year
- ☆100 · Updated last year
- Official PyTorch Implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight… ☆63 · Updated last year
- ☆25 · Updated 3 years ago
- ☆33 · Updated last year
- MobiSys#114 ☆21 · Updated last year
- Summary of system papers/frameworks/code/tools on training or serving large models ☆57 · Updated last year
- [ICCV-2023] EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization ☆26 · Updated last year
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆39 · Updated last year
- ☆12 · Updated last year
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆33 · Updated 2 years ago
- [ACL'22] Training-free Neural Architecture Search for RNNs and Transformers ☆14 · Updated last year
- Flexible simulator for mixed precision and format simulation of LLMs and vision transformers. ☆51 · Updated 2 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆75 · Updated 2 years ago
- ☆59 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆111 · Updated last month
- [TMLR] Official PyTorch implementation of paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" ☆33 · Updated 11 months ago
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] ☆29 · Updated 2 years ago
- ☆11 · Updated last year
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … ☆72 · Updated 3 years ago
- The official implementation of TinyTrain [ICML '24] ☆22 · Updated last year
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆64 · Updated 4 months ago
- [CVPRW 2021] Dynamic-OFA: Runtime DNN Architecture Switching for Performance Scaling on Heterogeneous Embedded Platforms ☆29 · Updated 2 years ago
- Quantized Attention on GPU ☆44 · Updated 8 months ago