xudoong / EdgeVisionTransformer
Deploying computer-vision Transformer models to mobile devices.
☆18 · Updated 4 years ago
Alternatives and similar repositories for EdgeVisionTransformer
Users interested in EdgeVisionTransformer are comparing it to the libraries listed below.
- This is a list of awesome edgeAI inference related papers. ☆98 · Updated 2 years ago
- [CVPRW 2021] Dynamic-OFA: Runtime DNN Architecture Switching for Performance Scaling on Heterogeneous Embedded Platforms ☆30 · Updated 3 years ago
- A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices. ☆364 · Updated last year
- Code for paper "ElasticTrainer: Speeding Up On-Device Training with Runtime Elastic Tensor Selection" (MobiSys'23) ☆14 · Updated 2 years ago
- Official implementation for ECCV 2022 paper LIMPQ - "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" ☆61 · Updated 2 years ago
- ☆78 · Updated 2 years ago
- Manually implemented quantization-aware training ☆23 · Updated 3 years ago
- ☆37 · Updated 3 years ago
- ☆208 · Updated 4 years ago
- About DNN compression and acceleration on Edge Devices. ☆57 · Updated 4 years ago
- [MobiCom 24] Efficient and Adaptive DNN inference under changeable memory budgets ☆58 · Updated last year
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … ☆73 · Updated 3 years ago
- [ICML 2023] This project is the official implementation of our accepted ICML 2023 paper BiBench: Benchmarking and Analyzing Network Binar… ☆56 · Updated last year
- ☆102 · Updated 2 years ago
- Adaptive Model Streaming for real-time video inference on edge devices ☆41 · Updated 4 years ago
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization ☆93 · Updated 3 years ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆129 · Updated 2 years ago
- MobiSys#114 ☆23 · Updated 2 years ago
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆57 · Updated 2 years ago
- You Only Search Once: On Lightweight Differentiable Architecture Search for Resource-Constrained Embedded Platforms ☆12 · Updated 2 years ago
- BitPack is a practical tool to efficiently save ultra-low precision/mixed-precision quantized models. ☆58 · Updated 3 years ago
- A collection of research papers on efficient training of DNNs ☆70 · Updated 3 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- Official implementation of the EMNLP23 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… ☆50 · Updated 2 years ago
- ☆169 · Updated 2 years ago
- ☆40 · Updated last year
- An external memory allocator example for PyTorch. ☆16 · Updated 6 months ago
- ☆11 · Updated last year
- Collections of model quantization algorithms. Any issues, please contact Peng Chen (blueardour@gmail.com) ☆73 · Updated 4 years ago
- ☆19 · Updated 3 years ago