robeld / ERNIE
Open Source Neural Machine Translation in PyTorch
☆17 · Updated 6 years ago
Alternatives and similar repositories for ERNIE
Users interested in ERNIE are comparing it to the libraries listed below
- [ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing ☆335 · Updated last year
- Implementation of a Quantized Transformer Model ☆19 · Updated 6 years ago
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference ☆160 · Updated 3 years ago
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization ☆258 · Updated 2 years ago
- [KDD'22] Learned Token Pruning for Transformers ☆100 · Updated 2 years ago
- MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices ☆69 · Updated 5 years ago
- Block Sparse movement pruning ☆81 · Updated 4 years ago
- ☆205 · Updated 3 years ago
- Compression of NMT transformer model with tensor methods ☆47 · Updated 6 years ago
- Prune a model while finetuning or training. ☆405 · Updated 3 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408) ☆197 · Updated 2 years ago
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Pro…" ☆62 · Updated 3 weeks ago
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆192 · Updated 2 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆172 · Updated 5 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆133 · Updated last year
- Running BERT without Padding ☆475 · Updated 3 years ago
- [ICLR 2022] Code for the paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036) ☆22 · Updated 2 years ago
- PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ☆109 · Updated 6 years ago
- PyTorch implementation of Patient Knowledge Distillation for BERT Model Compression ☆203 · Updated 6 years ago
- A Fast Multi-processing BERT Inference System ☆101 · Updated 2 years ago
- Research and development for optimizing transformers ☆131 · Updated 4 years ago
- A library for researching neural network compression and acceleration methods. ☆139 · Updated last month
- Official PyTorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 4 years ago
- This project is the official implementation of our accepted ICLR 2022 paper BiBERT: Accurate Fully Binarized BERT. ☆88 · Updated 2 years ago
- A PyTorch implementation of Transformer in "Attention is All You Need" ☆106 · Updated 4 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 4 years ago
- ☆15 · Updated 4 years ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆610 · Updated last year
- Root Mean Square Layer Normalization ☆254 · Updated 2 years ago
- ☆17 · Updated 5 years ago