MAC-AutoML / YOCO-BERT
The official implementation of "You Only Compress Once: Towards Effective and Elastic BERT Compression via Exploit-Explore Stochastic Nature Gradient".
☆48 · Updated 3 years ago
Related projects
Alternatives and complementary repositories for YOCO-BERT
- Code for the Findings of EMNLP 2021 paper "EfficientBERT: Progressively Searching Multilayer Perceptron …" ☆32 · Updated last year
- Parameter Efficient Transfer Learning with Diff Pruning ☆72 · Updated 3 years ago
- Code for SelfAugment ☆27 · Updated 3 years ago
- Block Sparse movement pruning ☆78 · Updated 3 years ago
- Code for the EMNLP 2020 paper CoDIR ☆41 · Updated 2 years ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆44 · Updated 2 years ago
- Code for the paper "Continual and Multi-Task Architecture Search" (ACL 2019) ☆41 · Updated 5 years ago
- [ICLR 2021] "UMEC: Unified Model and Embedding Compression for Efficient Recommendation Systems" by Jiayi Shen, Haotao Wang*, Shupeng Gui… ☆39 · Updated 2 years ago
- NAS Benchmark in "Prioritized Architecture Sampling with Monto-Carlo Tree Search", CVPR 2021 ☆37 · Updated 3 years ago
- MixPath: A Unified Approach for One-shot Neural Architecture Search ☆28 · Updated 4 years ago
- Code for DATA: Differentiable ArchiTecture Approximation ☆11 · Updated 3 years ago
- [ACL-IJCNLP 2021] "EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets" by Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, … ☆17 · Updated 2 years ago
- Code release for "Self-Tuning for Data-Efficient Deep Learning" (ICML 2021) ☆110 · Updated 3 years ago
- Implementation of "SYNTHESIZER: Rethinking Self-Attention in Transformer Models" in PyTorch ☆70 · Updated 4 years ago
- Knowledge Distillation Algorithms implemented with PyTorch ☆17 · Updated 5 years ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆48 · Updated 11 months ago
- Role-Wise Data Augmentation for Knowledge Distillation ☆18 · Updated last year
- [ICLR 2022] "As-ViT: Auto-scaling Vision Transformers without Training" by Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wa… ☆76 · Updated 2 years ago
- Automated neural architecture search algorithms implemented in PyTorch and the AutoGluon toolkit ☆12 · Updated 4 years ago
- Code for ViTAS: Vision Transformer Architecture Search ☆51 · Updated 3 years ago
- The implementation of multi-branch attentive Transformer (MAT) ☆33 · Updated 4 years ago
- Code for the paper "Minimizing FLOPs to Learn Efficient Sparse Representations" (ICLR 2020) ☆21 · Updated 4 years ago
- PyTorch / PyTorch Lightning framework for experimenting with knowledge distillation in image classification problems ☆31 · Updated 3 months ago
- Code for "AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling" ☆103 · Updated 3 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" (https://arxiv.org/abs/2003.07845) ☆119 · Updated 3 years ago