learning1234embed / NeuralWeightVirtualization
[MobiSys 2020] Fast and Scalable In-memory Deep Multitask Learning via Neural Weight Virtualization
☆15 · Updated 4 years ago
Related projects
Alternatives and complementary repositories for NeuralWeightVirtualization
- μNAS is a neural architecture search (NAS) system that designs small-yet-powerful microcontroller-compatible neural networks. ☆76 · Updated 3 years ago
- This repository contains the PyTorch scripts to train mixed-precision networks for microcontroller deployment, based on the memory contr… ☆49 · Updated 6 months ago
- A list of awesome edge-AI inference related papers. ☆88 · Updated 11 months ago
- A curated list of early-exiting work (LLM, CV, NLP, etc.). ☆29 · Updated 3 months ago
- MobiSys #114. ☆21 · Updated last year
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021). ☆36 · Updated 3 years ago
- Official implementation of LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" (ECCV 2022). ☆47 · Updated last year
- Code for the ECCV 2020 paper "Post-Training Piecewise Linear Quantization for Deep Neural Networks". ☆67 · Updated 3 years ago
- PyTorch implementation of "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference". ☆54 · Updated 5 years ago
- Measuring and predicting on-device metrics (latency, power, etc.) of machine learning models. ☆66 · Updated last year
- PyTorch implementation of APoT quantization (ICLR 2020). ☆268 · Updated 2 years ago
- Source code for the paper "A Latency-Predictable Multi-Dimensional Optimization Framework for DNN-driven Autonomous Systems". ☆19 · Updated 3 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming. ☆95 · Updated 3 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming. ☆32 · Updated last year
- CMix-NN: Mixed Low-Precision CNN Library for Memory-Constrained Edge Devices. ☆39 · Updated 4 years ago
- Conditional channel- and precision-pruning on neural networks. ☆72 · Updated 4 years ago
- PyTorch-based early-exit network inspired by BranchyNet. ☆29 · Updated last year
- Experimental deep learning framework written in Rust. ☆14 · Updated 2 years ago
- AFP is a hardware-friendly quantization framework for DNNs, contributed by Fangxin Liu and Wenbo Zhao. ☆11 · Updated 3 years ago
- Create tiny ML systems for on-device learning. ☆20 · Updated 3 years ago
- Deep Compressive Offloading: Speeding Up Neural Network Inference by Trading Edge Computation for Network Latency. ☆25 · Updated 3 years ago
- DNN compression and acceleration on edge devices. ☆55 · Updated 3 years ago