eml-eda / pit
Lightweight Neural Architecture Search for Temporal Convolutional Networks at the Edge
☆10 · Updated 2 years ago
Alternatives and similar repositories for pit:
Users interested in pit are comparing it to the libraries listed below.
- ☆25 · Updated 3 years ago
- [CVPR 2024] Official implementation for A&B BNN: Add&Bit-Operation-Only Hardware-Friendly Binary Neural Network ☆23 · Updated 4 months ago
- Official implementation for the ECCV 2022 paper LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" ☆53 · Updated 2 years ago
- μNAS is a neural architecture search (NAS) system that designs small-yet-powerful microcontroller-compatible neural networks. ☆81 · Updated 4 years ago
- A plug-and-play lightweight tool for the inference optimization of deep neural networks ☆41 · Updated last week
- Code for High-Capacity Expert Binary Networks (ICLR 2021). ☆27 · Updated 3 years ago
- [ICLR 2021 Spotlight] "CPT: Efficient Deep Neural Network Training via Cyclic Precision" by Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yinin… ☆30 · Updated last year
- Reproducing the quantization paper PACT ☆63 · Updated 2 years ago
- Training Quantized Neural Networks with a Full-precision Auxiliary Module ☆13 · Updated 4 years ago
- Code for Position-based Scaled Gradient for Model Quantization and Pruning (NeurIPS 2020) ☆26 · Updated 4 years ago
- Neural network acceleration on hardware such as ASIC, FPGA, GPU, and PIM ☆51 · Updated 5 years ago
- ☆19 · Updated 4 years ago
- The code for Joint Neural Architecture Search and Quantization ☆13 · Updated 6 years ago
- Recent Advances on Efficient Vision Transformers ☆50 · Updated 2 years ago
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… ☆15 · Updated 3 years ago
- [TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA ☆16 · Updated 2 years ago
- [ICCV 2023] EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization ☆25 · Updated last year
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" ☆30 · Updated 8 months ago
- An out-of-box PyTorch scaffold for neural network Quantization-Aware Training (QAT) research. Website: https://github.com/zhutmost/neuralz… ☆26 · Updated 2 years ago
- ☆39 · Updated 2 years ago
- How Do Adam and Training Strategies Help BNNs Optimization? (ICML 2021) ☆59 · Updated 3 years ago
- Official PyTorch Implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight… ☆63 · Updated 8 months ago
- [TMLR] Official PyTorch implementation of the paper "Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precisio… ☆44 · Updated 6 months ago
- [CVPRW 21] "BNN - BN = ? Training Binary Neural Networks without Batch Normalization", Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu… ☆57 · Updated 3 years ago
- [ACL'22] Training-free Neural Architecture Search for RNNs and Transformers ☆14 · Updated 11 months ago
- [NeurIPS 2020] ShiftAddNet: A Hardware-Inspired Deep Network ☆71 · Updated 4 years ago
- An open-source PyTorch library for developing energy-efficient multiplication-less models and applications. ☆13 · Updated 2 months ago
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] ☆29 · Updated 2 years ago
- ☆43 · Updated last year
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆32 · Updated last year