robeld / ERNIE
Open Source Neural Machine Translation in PyTorch
☆17 · Updated 6 years ago
Alternatives and similar repositories for ERNIE
Users interested in ERNIE are comparing it to the libraries listed below.
- [ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing ☆336 · Updated last year
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference ☆161 · Updated 3 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆198 · Updated 2 years ago
- Prune a model while finetuning or training. ☆405 · Updated 3 years ago
- Implementation of a Quantized Transformer Model ☆19 · Updated 6 years ago
- MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices ☆71 · Updated 5 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆174 · Updated 5 years ago
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination" ☆62 · Updated 3 months ago
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization ☆267 · Updated 2 years ago
- Block Sparse movement pruning ☆81 · Updated 5 years ago
- Compression of NMT transformer model with tensor methods ☆48 · Updated 6 years ago
- [KDD'22] Learned Token Pruning for Transformers ☆102 · Updated 2 years ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆611 · Updated last year
- ⛵️ The official PyTorch implementation for "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020) ☆315 · Updated 2 years ago
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆192 · Updated 2 years ago
- ☆208 · Updated 4 years ago
- PyTorch implementation of Patient Knowledge Distillation for BERT Model Compression ☆203 · Updated 6 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆134 · Updated last year
- A PyTorch implementation of Transformer in "Attention is All You Need" ☆106 · Updated 5 years ago
- [ICLR 2022] Code for the paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036) ☆22 · Updated 2 years ago
- ☆254 · Updated 3 years ago
- Root Mean Square Layer Normalization ☆260 · Updated 2 years ago
- Running BERT without Padding ☆476 · Updated 3 years ago
- PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ☆110 · Updated 7 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 4 years ago
- The official implementation of "You Only Compress Once: Towards Effective and Elastic BERT Compression via Exploit-Explore Stochastic Nature Gradient" ☆48 · Updated 4 years ago
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 5 years ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆48 · Updated 3 years ago
- ☆16 · Updated 4 years ago
- Source code for the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆127 · Updated 4 years ago