robeld / ERNIE
Open Source Neural Machine Translation in PyTorch
☆17 · Updated 6 years ago
Alternatives and similar repositories for ERNIE
Users interested in ERNIE are comparing it to the libraries listed below.
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference ☆157 · Updated 3 years ago
- Implementation of a Quantized Transformer Model ☆19 · Updated 6 years ago
- Compression of NMT transformer model with tensor methods ☆48 · Updated 6 years ago
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization ☆255 · Updated 2 years ago
- [KDD'22] Learned Token Pruning for Transformers ☆98 · Updated 2 years ago
- [ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing ☆336 · Updated last year
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408) ☆196 · Updated 2 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆172 · Updated 5 years ago
- Implementation of "PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination", a method to improve BERT inference time ☆62 · Updated 3 months ago
- Block Sparse movement pruning ☆81 · Updated 4 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆133 · Updated last year
- MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices ☆69 · Updated 5 years ago
- Root Mean Square Layer Normalization ☆249 · Updated 2 years ago (a minimal RMSNorm sketch appears after this list)
- Prune a model while finetuning or training. ☆403 · Updated 3 years ago
- ☆252 · Updated 2 years ago
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆101 · Updated 4 years ago
- Simple gradient checkpointing for eager mode execution ☆46 · Updated 4 years ago (see the checkpointing sketch after this list)
- Running BERT without Padding ☆474 · Updated 3 years ago
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆191 · Updated 2 years ago
- PyTorch implementation of "Patient Knowledge Distillation for BERT Model Compression" ☆203 · Updated 5 years ago (a distillation-loss sketch follows the list)
- ☆205 · Updated 3 years ago
- ⛵️ The official PyTorch implementation of "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020) ☆314 · Updated 2 years ago
- [ICLR 2019] Multilingual Neural Machine Translation with Knowledge Distillation ☆70 · Updated 4 years ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆612 · Updated last year
- A library for researching neural network compression and acceleration methods. ☆138 · Updated 11 months ago
- Efficient, checkpointed data loading for deep learning with massive data sets. ☆208 · Updated 2 years ago
- LAMB Optimizer for Large Batch Training (TensorFlow version) ☆120 · Updated 5 years ago (a PyTorch sketch of the LAMB update appears after this list)
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 3 years ago
- A PyTorch implementation of the Transformer from "Attention Is All You Need" ☆106 · Updated 4 years ago
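
The Root Mean Square Layer Normalization entry refers to RMSNorm (Zhang and Sennrich, 2019), which drops LayerNorm's mean-centering and bias and rescales activations by their RMS alone. A minimal PyTorch sketch of the idea, not that repository's own code:

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """RMSNorm: normalize by the root mean square over the last dimension."""
    def __init__(self, dim: int, eps: float = 1e-8):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))  # learned gain

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # rms(x) = sqrt(mean(x_i^2) + eps); no mean subtraction, no bias
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).sqrt()
        return self.weight * (x / rms)

print(RMSNorm(16)(torch.randn(2, 5, 16)).shape)  # torch.Size([2, 5, 16])
```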
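The gradient checkpointing entry targets eager execution; in PyTorch the same trade (recompute activations during the backward pass instead of storing them) is available through the built-in torch.utils.checkpoint. A sketch assuming a recent PyTorch (the use_reentrant flag exists from 1.11 onward):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# A toy deep stack; each block's activations are recomputed on backward
# instead of being kept in memory, trading extra compute for less memory.
blocks = nn.ModuleList(
    nn.Sequential(nn.Linear(256, 256), nn.ReLU()) for _ in range(8)
)

def forward_checkpointed(x: torch.Tensor) -> torch.Tensor:
    for block in blocks:
        x = checkpoint(block, x, use_reentrant=False)  # PyTorch >= 1.11
    return x

x = torch.randn(32, 256, requires_grad=True)
forward_checkpointed(x).sum().backward()
print(x.grad.shape)  # torch.Size([32, 256])
```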
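The Patient Knowledge Distillation entry builds on the standard distillation objective: the student matches the teacher's temperature-softened output distribution (Patient KD additionally matches selected intermediate layers). A sketch of the core soft-label loss only, with function and variable names of my own choosing:

```python
import torch
import torch.nn.functional as F

def soft_label_loss(student_logits, teacher_logits, T: float = 2.0):
    """KL divergence between temperature-softened distributions.

    The T*T factor keeps gradient magnitudes comparable across
    temperatures (Hinton et al., 2015).
    """
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

s = torch.randn(8, 10, requires_grad=True)  # student logits
t = torch.randn(8, 10)                      # teacher logits
print(soft_label_loss(s, t).item())
```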
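The LAMB entry links a TensorFlow implementation, but the update itself is framework-agnostic: take an Adam-style step, then rescale it per layer by the trust ratio ||w|| / ||update||. A single-tensor PyTorch sketch of one step, illustrative rather than the listed code:

```python
import torch

def lamb_step(w, g, m, v, step, lr=1e-3, b1=0.9, b2=0.999,
              eps=1e-6, weight_decay=0.01):
    """One LAMB update for a single parameter tensor, applied in place."""
    m.mul_(b1).add_(g, alpha=1 - b1)              # first moment
    v.mul_(b2).addcmul_(g, g, value=1 - b2)       # second moment
    m_hat = m / (1 - b1 ** step)                  # bias correction
    v_hat = v / (1 - b2 ** step)
    update = m_hat / (v_hat.sqrt() + eps) + weight_decay * w
    w_norm, u_norm = w.norm(), update.norm()
    # layer-wise trust ratio; fall back to 1 when either norm is zero
    trust = (w_norm / u_norm).item() if w_norm > 0 and u_norm > 0 else 1.0
    w.add_(update, alpha=-lr * trust)

w = torch.randn(128, 64)
m, v = torch.zeros_like(w), torch.zeros_like(w)
lamb_step(w, torch.randn_like(w), m, v, step=1)
```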