Zhen-Dong / CoDeNet
[FPGA'21] CoDeNet is an efficient PyTorch object-detection model with SOTA performance on VOC and COCO, built on CenterNet and hardware-software co-designed deformable convolution.
☆25 · Updated 2 years ago
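The deformable convolution that CoDeNet co-designs augments a standard convolution with learned per-output-position sampling offsets. Below is a minimal PyTorch sketch using torchvision's stock DeformConv2d operator, not CoDeNet's hardware-modified variant; all shapes and layer choices are illustrative.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

# Illustrative sizes, not taken from the CoDeNet paper.
N, C_in, C_out, H, W, k = 1, 16, 32, 64, 64, 3

# A small regular conv predicts 2 offsets (dy, dx) per kernel tap per output pixel.
offset_pred = nn.Conv2d(C_in, 2 * k * k, kernel_size=k, padding=1)
deform_conv = DeformConv2d(C_in, C_out, kernel_size=k, padding=1)

x = torch.randn(N, C_in, H, W)
offsets = offset_pred(x)       # (N, 2*k*k, H, W): learned sampling offsets
y = deform_conv(x, offsets)    # samples x at the regular grid plus offsets
print(y.shape)                 # torch.Size([1, 32, 64, 64])
```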
Alternatives and similar repositories for CoDeNet:
Users interested in CoDeNet are comparing it to the repositories listed below.
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… ☆15 · Updated 3 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Updated last year
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆40 · Updated 4 years ago
- An out-of-the-box PyTorch scaffold for neural-network quantization-aware-training (QAT) research. Website: https://github.com/zhutmost/neuralz… ☆26 · Updated 2 years ago
- PyTorch implementation of DiracDeltaNet from the paper "Synetgy: Algorithm-hardware Co-design for ConvNet Accelerators on Embedded FPGAs" ☆31 · Updated 5 years ago
- [TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA ☆16 · Updated 2 years ago
- An implementation of YOLO using the LSQ network quantization method (see the LSQ sketch after this list). ☆23 · Updated 2 years ago
- An FPGA-based neural-network inference accelerator that won third place in the DAC System Design Contest (DAC-SDC) ☆28 · Updated 2 years ago
- DAC System Design Contest 2020 ☆29 · Updated 4 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- BitPack is a practical tool to efficiently save ultra-low-precision/mixed-precision quantized models. ☆51 · Updated 2 years ago
- Approximate layers - TensorFlow extension ☆27 · Updated 11 months ago
- Neural Network Quantization With Fractional Bit-widths ☆12 · Updated 4 years ago
- Provides the code for the paper "EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators" by Luk… ☆17 · Updated 5 years ago
- The code for Joint Neural Architecture Search and Quantization ☆13 · Updated 5 years ago
- Designs for finalist teams of the DAC System Design Contest ☆36 · Updated 4 years ago
- FlexASR: A Reconfigurable Hardware Accelerator for Attention-based Seq-to-Seq Networks ☆42 · Updated 3 weeks ago
- A DAG processor and compiler for a tree-based spatial datapath. ☆13 · Updated 2 years ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆83 · Updated 6 months ago
- Algorithm-hardware Co-design for Deformable Convolution ☆24 · Updated 4 years ago
- BitSplit post-training quantization ☆49 · Updated 3 years ago
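For the LSQ-quantized YOLO entry above, here is a minimal sketch of LSQ (Learned Step Size Quantization) fake quantization, assuming the standard formulation from Esser et al. (ICLR 2020); the function and variable names are illustrative, not taken from that repository.

```python
import math
import torch

def lsq_quantize(x: torch.Tensor, step: torch.Tensor, num_bits: int = 4) -> torch.Tensor:
    """Fake-quantize x with a learned step size (LSQ-style sketch)."""
    Qn, Qp = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1   # signed integer range
    # Gradient scale for the step size, as prescribed by the LSQ paper.
    g = 1.0 / math.sqrt(x.numel() * Qp)
    s = step * g + (step - step * g).detach()   # forward value: step; gradient scaled by g
    q = torch.clamp(x / s, Qn, Qp)
    q = q + (q.round() - q).detach()            # straight-through estimator for round()
    return q * s                                # dequantized ("fake-quantized") tensor

# The step size is a learnable parameter trained jointly with the weights;
# LSQ initializes it from the tensor statistics.
w = torch.randn(64, 32, 3, 3, requires_grad=True)
step = torch.nn.Parameter(2 * w.detach().abs().mean() / math.sqrt(2 ** (4 - 1) - 1))
w_q = lsq_quantize(w, step, num_bits=4)         # use w_q in place of w during training
```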