GATECH-EIC / torchshiftadd
An open-sourced PyTorch library for developing energy efficient multiplication-less models and applications.
☆13 · Updated 5 months ago
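The "multiplication-less" idea behind shift-add models can be sketched as follows: quantize each weight to a signed power of two, so that every multiply reduces to a bit-shift plus a sign flip. This is a minimal illustration of the concept, not torchshiftadd's actual API; the function names are hypothetical.

```python
import math

def quantize_to_power_of_two(w):
    """Round a weight to the nearest signed power of two.

    Returns (sign, exponent); exponent is None for a zero weight.
    """
    sign = 1.0 if w >= 0 else -1.0
    exponent = round(math.log2(abs(w))) if w != 0 else None
    return sign, exponent

def shift_add_dot(weights, x):
    """Approximate dot(weights, x) using only shifts, signs, and adds."""
    acc = 0.0
    for w, xi in zip(weights, x):
        sign, exp = quantize_to_power_of_two(w)
        if exp is None:          # zero weight contributes nothing
            continue
        # In hardware, xi * 2**exp is a left (exp > 0) or right (exp < 0)
        # bit-shift, so no multiplier circuit is needed.
        acc += sign * xi * (2.0 ** exp)
    return acc
```

For weights that are already signed powers of two (e.g. 0.5 or -2.0) the result is exact; other weights incur a quantization error, which shift-add training methods compensate for during learning.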
Alternatives and similar repositories for torchshiftadd
Users interested in torchshiftadd are comparing it to the libraries listed below.
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… ☆16 · Updated 3 years ago
- ☆19 · Updated 4 years ago
- [ICASSP'20] DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architecture… ☆25 · Updated 2 years ago
- [TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA ☆17 · Updated 3 years ago
- ☆27 · Updated 3 months ago
- Neural Network Quantization With Fractional Bit-widths ☆12 · Updated 4 years ago
- A DAG processor and compiler for a tree-based spatial datapath. ☆13 · Updated 2 years ago
- ☆71 · Updated 5 years ago
- Fast Emulation of Approximate DNN Accelerators in PyTorch ☆23 · Updated last year
- An FPGA-based neural network inference accelerator, which won third place in DAC-SDC ☆28 · Updated 3 years ago
- [ICML 2022] ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks ☆16 · Updated 3 years ago
- ☆23 · Updated 3 years ago
- Designs from finalist teams of the DAC System Design Contest ☆37 · Updated 5 years ago
- DAC System Design Contest 2020 ☆29 · Updated 5 years ago
- Training with Block Minifloat number representation ☆16 · Updated 4 years ago
- Linux Docker image for the DNN accelerator exploration infrastructure composed of Accelergy and Timeloop ☆54 · Updated 3 months ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆40 · Updated 4 years ago
- Adaptive floating-point based numerical format for resilient deep learning ☆14 · Updated 3 years ago
- Torch2Chip (MLSys 2024) ☆53 · Updated 3 months ago
- An out-of-box PyTorch scaffold for Neural Network Quantization-Aware Training (QAT) research. Website: https://github.com/zhutmost/neuralz… ☆26 · Updated 2 years ago
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆108 · Updated 2 years ago
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference ☆48 · Updated last year
- [DATE 2025] Official implementation and dataset of AIrchitect v2: Learning the Hardware Accelerator Design Space through Unified Represen… ☆15 · Updated 6 months ago
- ☆24 · Updated 2 years ago
- ☆35 · Updated 5 years ago
- Approximate layers - TensorFlow extension ☆27 · Updated 3 months ago
- ☆12 · Updated 11 months ago
- A general framework for optimizing DNN dataflow on systolic arrays ☆39 · Updated 4 years ago
- Provides the code for the paper "EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators" by Luk… ☆19 · Updated 5 years ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆94 · Updated 10 months ago