CAS-CLab / BlockConv
[TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA
☆16 · Updated 2 years ago
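The technique named in the paper's title splits a feature map into tiles, zero-pads each tile independently, and convolves tile by tile, so only one tile's working set must be resident on-chip at a time. A minimal NumPy sketch of that idea (this is an illustration, not the repository's code; the tile size, helper names, and single-channel setup are assumptions):

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of an already-padded map x with kernel k."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def block_conv2d(x, k, block=4):
    """Block convolution sketch: tile x, zero-pad each tile locally,
    convolve each tile on its own, and stitch the outputs back together."""
    p = k.shape[0] // 2  # same-size padding applied per tile
    rows = []
    for i in range(0, x.shape[0], block):
        cols = []
        for j in range(0, x.shape[1], block):
            tile = x[i:i + block, j:j + block]
            padded = np.pad(tile, p)        # local zero padding, no cross-tile reads
            cols.append(conv2d(padded, k))  # same-size output per tile
        rows.append(np.concatenate(cols, axis=1))
    return np.concatenate(rows, axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k = rng.standard_normal((3, 3))

full = conv2d(np.pad(x, 1), k)  # ordinary same-padded convolution
blocked = block_conv2d(x, k)    # block convolution with 4x4 tiles

# The two agree wherever a receptive field stays inside one tile; near tile
# boundaries, block convolution sees zeros instead of the neighboring tile.
assert np.allclose(full[1:3, 1:3], blocked[1:3, 1:3])
```

The trade-off this sketch exposes is the one the paper studies: boundary pixels of each tile lose cross-tile context, in exchange for eliminating off-chip traffic for intermediate feature maps.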
Alternatives and similar repositories for BlockConv
Users interested in BlockConv are comparing it to the libraries listed below.
- An open-source PyTorch library for developing energy-efficient multiplication-less models and applications. ☆13 · Updated 3 months ago
- ☆28 · Updated 3 years ago
- An out-of-the-box PyTorch scaffold for neural network Quantization-Aware Training (QAT) research. Website: https://github.com/zhutmost/neuralz… ☆26 · Updated 2 years ago
- ☆19 · Updated 4 years ago
- Neural Network Quantization With Fractional Bit-widths ☆12 · Updated 4 years ago
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… ☆15 · Updated 3 years ago
- ☆23 · Updated 3 years ago
- [FPGA'21] CoDeNet is an efficient object detection model on PyTorch, with SOTA performance on VOC and COCO based on CenterNet and Co-Desi… ☆25 · Updated 2 years ago
- A PyTorch implementation of TQT. ☆21 · Updated 3 years ago
- DeiT implementation for Q-ViT ☆24 · Updated 3 weeks ago
- The code for Joint Neural Architecture Search and Quantization ☆13 · Updated 6 years ago
- Training Quantized Neural Networks with a Full-precision Auxiliary Module ☆13 · Updated 4 years ago
- [ICASSP'20] DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architecture… ☆23 · Updated 2 years ago
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference ☆48 · Updated last year
- ☆20 · Updated 3 years ago
- [TMLR] Official PyTorch implementation of paper "Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precisio… ☆44 · Updated 7 months ago
- Provides the code for the paper "EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators" by Luk… ☆19 · Updated 5 years ago
- An FPGA-based neural network inference accelerator, which won third place in DAC-SDC ☆28 · Updated 3 years ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆32 · Updated last year
- ☆22 · Updated last year
- ☆21 · Updated 2 years ago
- ☆34 · Updated 4 years ago
- A DAG processor and compiler for a tree-based spatial datapath. ☆13 · Updated 2 years ago
- ☆10 · Updated 2 years ago
- ☆26 · Updated last month
- ☆12 · Updated 2 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- Static Block Floating Point Quantization for CNN ☆32 · Updated 3 years ago
- [ICML 2022] ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks ☆16 · Updated 2 years ago