agural / memory-optimal-direct-convolutions
Code for reproducing the work of the ICML 2019 paper: Memory-Optimal Direct Convolutions for Maximizing Classification Accuracy in Embedded Applications
☆12 · Updated 6 years ago
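For context, a direct convolution computes each output element with nested loops over the kernel window instead of lowering the input to an im2col matrix, so no large intermediate buffer is materialized; this small activation/working-memory footprint is the property this line of embedded-inference work targets. Below is a minimal NumPy sketch of the idea only, not the repository's actual implementation; the function name, shapes, and stride/padding choices are assumptions for illustration.

```python
# Minimal sketch of a direct 2D convolution (valid padding, stride 1).
# Generic illustration, not the paper's or repository's implementation.
import numpy as np

def direct_conv2d(x, w):
    """x: input (C_in, H, W); w: kernels (C_out, C_in, kH, kW)."""
    c_in, h, width = x.shape
    c_out, _, kh, kw = w.shape
    out = np.zeros((c_out, h - kh + 1, width - kw + 1), dtype=x.dtype)
    # Each output element is accumulated directly from its kernel window,
    # so no (C_in*kH*kW) x (H_out*W_out) im2col buffer is ever allocated.
    for co in range(c_out):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[co, i, j] = np.sum(w[co] * x[:, i:i + kh, j:j + kw])
    return out

x = np.random.randn(3, 8, 8).astype(np.float32)
w = np.random.randn(4, 3, 3, 3).astype(np.float32)
print(direct_conv2d(x, w).shape)  # (4, 6, 6)
```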
Alternatives and similar repositories for memory-optimal-direct-convolutions
Users interested in memory-optimal-direct-convolutions are comparing it to the libraries listed below.
- Code for "Fast Sparse ConvNets" CVPR2020 submissions☆12Updated 5 years ago
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation☆27Updated 6 years ago
- [ECCV 2022] SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning☆20Updated 3 years ago
- The code for Joint Neural Architecture Search and Quantization☆13Updated 6 years ago
- Artifact for IPDPS'21: DSXplore: Optimizing Convolutional Neural Networks via Sliding-Channel Convolutions.☆13Updated 4 years ago
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga…☆15Updated 3 years ago
- [ICLR 2021 Spotlight] "CPT: Efficient Deep Neural Network Training via Cyclic Precision" by Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yinin…☆31Updated last year
- The code for our paper "Neural Architecture Search as Program Transformation Exploration"☆16Updated 4 years ago
- Official implementation of Neurips 2020 "Sparse Weight Activation Training" paper.☆29Updated 4 years ago
- The code for paper: Neuralpower: Predict and deploy energy-efficient convolutional neural networks☆21Updated 6 years ago
- Implementation for the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization"☆74Updated 5 years ago
- Identify a binary weight or binary weight and activation subnetwork within a randomly initialized network by only pruning and binarizing …☆51Updated 3 years ago
- Reference implementations of popular Binarized Neural Networks☆109Updated last week
- ☆25Updated 3 years ago
- Train neural networks with joint quantization and pruning on both weights and activations using any pytorch modules☆43Updated 3 years ago
- Post-training sparsity-aware quantization☆34Updated 2 years ago
- All about acceleration and compression of Deep Neural Networks☆33Updated 6 years ago
- BiSUNA framework specialized to compile for the Xilinx Alveo U50☆12Updated 4 years ago
- SAMO: Streaming Architecture Mapping Optimisation☆34Updated 2 years ago
- [ICCV 2021] Code release for "Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks"☆32Updated 3 years ago
- [NeurIPS 2019] E2-Train: Training State-of-the-art CNNs with Over 80% Less Energy☆21Updated 5 years ago
- Accelerating CNN's convolution operation on GPUs by using memory-efficient data access patterns.☆14Updated 7 years ago
- [NeurIPS 2024] BLAST: Block Level Adaptive Structured Matrix for Efficient Deep Neural Network Inference☆15Updated last year
- An implementation of a BinaryConnect network for cifar10☆11Updated 6 years ago
- Position-based Scaled Gradient for Model Quantization and Pruning Code (NeurIPS 2020)☆25Updated 4 years ago
- TBNv2: Convolutional Neural Network With Ternary Inputs and Binary Weights☆16Updated 5 years ago
- A highly modular PyTorch framework with a focus on Neural Architecture Search (NAS).☆23Updated 3 years ago
- The collection of training tricks of binarized neural networks.☆72Updated 4 years ago
- ☆52Updated 6 years ago
- This is the implementation for paper: AdaTune: Adaptive Tensor Program CompilationMade Efficient (NeurIPS 2020).☆14Updated 4 years ago