Zhen-Dong / CoDeNet
[FPGA'21] CoDeNet is an efficient object detection model in PyTorch, built on CenterNet with a co-designed deformable convolution, with SOTA performance on VOC and COCO.
☆27 · Updated 2 years ago
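Deformable convolution is the operator CoDeNet co-designs for FPGA deployment. For orientation only, here is a minimal sketch of a plain deformable convolution block using `torchvision.ops.DeformConv2d`; the layer sizes and the offset-prediction conv are illustrative assumptions, not CoDeNet's hardware-friendly variant.

```python
# Minimal sketch of a deformable convolution block in PyTorch via
# torchvision.ops.DeformConv2d. Illustrative only -- NOT CoDeNet's
# co-designed operator; channel sizes are arbitrary.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # Offsets come from an ordinary conv: 2 values (dy, dx) per
        # kernel sample position, per output location.
        self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        offset = self.offset_conv(x)
        return self.deform_conv(x, offset)

if __name__ == "__main__":
    x = torch.randn(1, 16, 32, 32)
    y = DeformableBlock(16, 32)(x)
    print(y.shape)  # torch.Size([1, 32, 32, 32])
```

CoDeNet's co-designed variant additionally constrains the predicted offsets (e.g., rounding and bounding them) so the operator maps efficiently onto the FPGA; the sketch above omits those modifications.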
Alternatives and similar repositories for CoDeNet
Users interested in CoDeNet are comparing it to the repositories listed below.
- ☆19 · Updated 4 years ago
- [TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA ☆17 · Updated 3 years ago
- An out-of-the-box PyTorch scaffold for Neural Network Quantization-Aware Training (QAT) research. Website: https://github.com/zhutmost/neuralz… ☆25 · Updated 2 years ago
- ☆28 · Updated 3 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… ☆15 · Updated 3 years ago
- BitSplit post-training quantization ☆50 · Updated 3 years ago
- An implementation of YOLO using the LSQ network quantization method (see the quantizer sketch after this list). ☆22 · Updated 3 years ago
- An open-source PyTorch library for developing energy-efficient multiplication-less models and applications. ☆13 · Updated 8 months ago
- ☆23 · Updated 4 years ago
- A PyTorch implementation of TQT. ☆21 · Updated 3 years ago
- Training with Block Minifloat number representation ☆16 · Updated 4 years ago
- Code for Joint Neural Architecture Search and Quantization ☆13 · Updated 6 years ago
- [NeurIPS 2020] ShiftAddNet: A Hardware-Inspired Deep Network ☆73 · Updated 4 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆41 · Updated 4 years ago
- Static Block Floating Point Quantization for CNN ☆36 · Updated 4 years ago
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization ☆94 · Updated 3 years ago
- ☆71 · Updated 5 years ago
- My name is Fang Biao. I'm currently pursuing my Master's degree at the College of Computer Science and Engineering, Sichuan University, … ☆53 · Updated 2 years ago
- An FPGA-based neural network inference accelerator, which won third place in DAC-SDC ☆28 · Updated 3 years ago
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference ☆52 · Updated last year
- Approximate layers - TensorFlow extension ☆26 · Updated 6 months ago
- BitPack is a practical tool to efficiently save ultra-low-precision/mixed-precision quantized models. ☆58 · Updated 2 years ago
- DAC System Design Contest 2020 ☆29 · Updated 5 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Updated 2 years ago
- Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming ☆98 · Updated 4 years ago
- Neural Network Quantization With Fractional Bit-widths ☆11 · Updated 4 years ago
- ☆35 · Updated 6 years ago
- [CVPRW 2021] Dynamic-OFA: Runtime DNN Architecture Switching for Performance Scaling on Heterogeneous Embedded Platforms ☆30 · Updated 2 years ago
- Fast NPU-aware Neural Architecture Search ☆22 · Updated 4 years ago
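Several of the entries above (the QAT scaffold, LSQ, TQT, BSQ, BitSplit) revolve around fake quantization trained with a straight-through estimator. Below is a minimal sketch of an LSQ-style learned-step-size quantizer; the class name, initialization, and gradient-scale choice are illustrative assumptions and are not taken from any of the listed repositories.

```python
# Minimal sketch of an LSQ-style fake quantizer: a learned step size
# trained through a straight-through estimator. Illustrative only; not
# the exact code of any repository listed above.
import torch
import torch.nn as nn

class LSQQuantizer(nn.Module):
    def __init__(self, num_bits=8, signed=True):
        super().__init__()
        self.qmin = -(2 ** (num_bits - 1)) if signed else 0
        self.qmax = 2 ** (num_bits - 1) - 1 if signed else 2 ** num_bits - 1
        self.step = nn.Parameter(torch.tensor(1.0))  # learned step size (arbitrary init)

    def forward(self, x):
        # Gradient scale from the LSQ paper: 1 / sqrt(n * qmax)
        g = 1.0 / (x.numel() * self.qmax) ** 0.5
        # Trick: value equals self.step, but its gradient is scaled by g.
        step = self.step * g + (self.step - self.step * g).detach()
        q = torch.clamp(x / step, self.qmin, self.qmax)
        # Straight-through estimator: round() in the forward pass,
        # identity gradient in the backward pass.
        q = q + (q.round() - q).detach()
        return q * step

if __name__ == "__main__":
    w = torch.randn(64, 32, 3, 3, requires_grad=True)
    quantizer = LSQQuantizer(num_bits=4)
    w_q = quantizer(w)
    w_q.sum().backward()
    print(w_q.unique().numel(), quantizer.step.grad is not None)  # <= 16 levels, True
```

In practice such a quantizer is wrapped around weights and/or activations of each layer during QAT, and the learned step sizes are exported as the per-tensor scales used at inference time.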