Mohamed-Imed-Eddine / Harmonic-NAS
Harmonic-NAS: Hardware-Aware Multimodal Neural Architecture Search on Resource-constrained Devices (ACML 2023)
☆16 · Updated last year
Alternatives and similar repositories for Harmonic-NAS
Users interested in Harmonic-NAS are comparing it to the repositories listed below.
- List of papers related to Vision Transformer quantization and hardware acceleration in recent AI conferences and journals. ☆94 · Updated last year
- DeiT implementation for Q-ViT. ☆25 · Updated 5 months ago
- [CVPR 2025] APHQ-ViT: Post-Training Quantization with Average Perturbation Hessian Based Reconstruction for Vision Transformers. ☆27 · Updated 6 months ago
- Awesome Pruning. ✅ Curated resources for neural network pruning. ☆168 · Updated last year
- The official implementation of the NeurIPS 2022 paper Q-ViT. ☆97 · Updated 2 years ago
- EQ-Net [ICCV 2023]. ☆30 · Updated 2 years ago
- [TMLR] Official PyTorch implementation of the paper “Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precision”. ☆46 · Updated last year
- Reproduction of the quantization paper PACT. ☆64 · Updated 3 years ago
- Post-Training Quantization for Vision Transformers. ☆227 · Updated 3 years ago
- [ICCV 2023] Official implementation of Rectified Straight Through Estimator (ReSTE). ☆29 · Updated last year
- The official implementation of the ICML 2023 paper OFQ-ViT. ☆33 · Updated 2 years ago
- BinaryViT: Pushing Binary Vision Transformers Towards Convolutional Models. ☆37 · Updated last year
- [NeurIPS 2022] “M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design”, Hanxue … ☆131 · Updated 2 years ago
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design. ☆117 · Updated 2 years ago
- LSQ+ or LSQplus. ☆74 · Updated 8 months ago
- [ECCV 2024] AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer. ☆34 · Updated 10 months ago
- AFP: a hardware-friendly quantization framework for DNNs, contributed by Fangxin Liu and Wenbo Zhao. ☆13 · Updated 3 years ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer. ☆31 · Updated last year
- Join the High Accuracy Club on ImageNet with A Binary Neural Network Ticket. ☆70 · Updated 2 years ago
- [ICCAD 2025] Squant. ☆15 · Updated 3 months ago