xudoong / EdgeVisionTransformer
Deploying computer-vision Transformer models to mobile devices.
☆18 Updated 3 years ago
Alternatives and similar repositories for EdgeVisionTransformer
Users interested in EdgeVisionTransformer are comparing it to the libraries listed below.
- A list of awesome papers on edge-AI inference. ☆96 Updated last year
- [CVPRW 2021] Dynamic-OFA: Runtime DNN Architecture Switching for Performance Scaling on Heterogeneous Embedded Platforms ☆29 Updated 2 years ago
- Manually implemented quantization-aware training ☆21 Updated 2 years ago
- ☆31 Updated last year
- An external memory allocator example for PyTorch. ☆14 Updated 3 years ago
- ☆153 Updated 2 years ago
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… ☆47 Updated 2 years ago
- Code for ACM MobiCom 2024 paper "FlexNN: Efficient and Adaptive DNN Inference on Memory-Constrained Edge Devices" ☆55 Updated 5 months ago
- [ICML 2023] This project is the official implementation of our accepted ICML 2023 paper BiBench: Benchmarking and Analyzing Network Binar… ☆56 Updated last year
- ☆36 Updated 2 years ago
- ☆206 Updated 3 years ago
- A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices. ☆354 Updated 11 months ago
- Post-training sparsity-aware quantization ☆34 Updated 2 years ago
- Official implementation for ECCV 2022 paper LIMPQ - "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" ☆56 Updated 2 years ago
- BitPack is a practical tool to efficiently save ultra-low precision/mixed-precision quantized models. ☆55 Updated 2 years ago
- ☆100 Updated last year
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization ☆95 Updated 3 years ago
- ☆77 Updated 2 years ago
- You Only Search Once: On Lightweight Differentiable Architecture Search for Resource-Constrained Embedded Platforms ☆11 Updated 2 years ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆121 Updated 2 years ago
- ☆76 Updated 2 years ago
- Adaptive Model Streaming for real-time video inference on edge devices ☆41 Updated 3 years ago
- MobiSys#114 ☆21 Updated last year
- BitSplit post-training quantization ☆50 Updated 3 years ago
- Official implementation of the EMNLP23 paper: Outlier Suppression+: Accurate quantization of large language models by equivalent and opti… ☆46 Updated last year
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆110 Updated 7 months ago
- ☆35 Updated 2 years ago
- A collection of research papers on efficient training of DNNs ☆70 Updated 3 years ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆92 Updated last week
- NART (NART is not A RunTime), a deep learning inference framework. ☆37 Updated 2 years ago