robeld / ERNIE
Open Source Neural Machine Translation in PyTorch
☆17 · Updated 6 years ago
Alternatives and similar repositories for ERNIE
Users interested in ERNIE are comparing it to the libraries listed below.
- [ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing ☆336 · Updated last year
- Implementation of a Quantized Transformer Model ☆19 · Updated 6 years ago
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization ☆259 · Updated 2 years ago
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference ☆160 · Updated 3 years ago
- Prune a model while finetuning or training. ☆405 · Updated 3 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆197 · Updated 2 years ago
- Compression of NMT transformer model with tensor methods ☆47 · Updated 6 years ago
- Block-sparse movement pruning ☆81 · Updated 4 years ago
- MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices ☆70 · Updated 5 years ago
- [KDD'22] Learned Token Pruning for Transformers ☆100 · Updated 2 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆173 · Updated 5 years ago
- Method to improve inference time for BERT. This is an implementation of the paper "PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination". ☆62 · Updated last month
- ☆206 · Updated 3 years ago
- Root Mean Square Layer Normalization (see the sketch after this list) ☆256 · Updated 2 years ago
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆610 · Updated last year
- Transformers without Tears: Improving the Normalization of Self-Attention ☆133 · Updated last year
- Running BERT without Padding ☆475 · Updated 3 years ago
- PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ☆109 · Updated 7 years ago
- ⛵️ The official PyTorch implementation of "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020) ☆315 · Updated 2 years ago
- PyTorch implementation of "Patient Knowledge Distillation for BERT Model Compression" ☆203 · Updated 6 years ago
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆192 · Updated 2 years ago
- Implementation of the NeurIPS 2019 paper "Normalization Helps Training of Quantized LSTM" ☆31 · Updated last year
- ☆254 · Updated 3 years ago
- A library for researching neural network compression and acceleration methods. ☆139 · Updated 2 months ago
- DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022) ☆50 · Updated 2 years ago
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 5 years ago
- Transformer with Untied Positional Encoding (TUPE). Code for the paper "Rethinking Positional Encoding in Language Pre-training". Improves existing… ☆252 · Updated 3 years ago
- Efficient, checkpointed data loading for deep learning with massive data sets. ☆209 · Updated 2 years ago
- In this repository, we explore model compression for transformer architectures via quantization. We specifically explore quantization-aware… ☆24 · Updated 4 years ago
- Code associated with the paper "SkipBERT: Efficient Inference with Shallow Layer Skipping" (ACL 2022) ☆16 · Updated 3 years ago
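
Of the entries above, only Root Mean Square Layer Normalization (Zhang & Sennrich, NeurIPS 2019) reduces to a single formula: activations are rescaled by their root mean square, with no mean subtraction, unlike LayerNorm. The PyTorch sketch below is a minimal illustration of that formula, not code from the linked repository; the class name, `eps` default, and shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Minimal RMSNorm sketch: y = x / RMS(x) * g, where
    RMS(x) = sqrt(mean(x^2) + eps) over the last dimension.
    Names and defaults are illustrative, not from the linked repo."""

    def __init__(self, dim: int, eps: float = 1e-8):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))  # learned gain g

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Root mean square over the feature dimension; no mean
        # subtraction, which is what distinguishes RMSNorm from LayerNorm.
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).sqrt()
        return x / rms * self.weight

# Usage: normalize a batch of 512-dim activations.
norm = RMSNorm(512)
out = norm(torch.randn(2, 10, 512))
```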