gonglinyuan / StackingBERT
Source code for "Efficient Training of BERT by Progressively Stacking"
☆113 · Updated 6 years ago
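For context, here is a minimal sketch of the progressive stacking idea suggested by the paper's title: pretrain a shallow BERT, then duplicate its trained encoder layers to warm-start a model of twice the depth. This is an illustrative reconstruction, not code from this repository; all class names and hyperparameters below are made up.

```python
# Illustrative sketch of progressive stacking (not code from this repo):
# grow an L-layer Transformer encoder into a 2L-layer one by copying
# the trained bottom layers on top, then continue pretraining.
import copy

import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, layers):
        super().__init__()
        self.layers = nn.ModuleList(layers)

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x


def make_encoder(num_layers, d_model=256, nhead=4):
    return Encoder([
        nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        for _ in range(num_layers)
    ])


def stack(encoder):
    """Double the depth: the new top half is a copy of the trained bottom half."""
    bottom = [copy.deepcopy(layer) for layer in encoder.layers]
    top = [copy.deepcopy(layer) for layer in encoder.layers]
    return Encoder(bottom + top)


shallow = make_encoder(3)               # pretrain this for a while...
deep = stack(shallow)                   # ...then warm-start a 6-layer model
out = deep(torch.randn(2, 16, 256))     # (batch, seq_len, d_model)
print(len(deep.layers), out.shape)      # 6 torch.Size([2, 16, 256])
```

The paper applies such growth steps progressively rather than once; a single doubling is shown here for brevity.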
Alternatives and similar repositories for StackingBERT
Users interested in StackingBERT are comparing it to the libraries listed below
- PyTorch implementation of Transformer-based Neural Machine Translation ☆78 · Updated 2 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆173 · Updated 5 years ago
- A dual learning toolkit developed by Microsoft Research ☆72 · Updated 2 years ago
- ☆119 · Updated 6 years ago
- Code for "Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View" ☆148 · Updated 6 years ago
- Non-autoregressive Neural Machine Translation (not a full version) ☆70 · Updated 2 years ago
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- Source code for "Accelerating Neural Transformer via an Average Attention Network" (see the sketch after this list) ☆78 · Updated 6 years ago
- This repo is not maintained. For the latest version, please visit https://github.com/ictnlp. A collection of Transformer guides, implementations… ☆44 · Updated 6 years ago
- [EMNLP 2018] On Tree-Based Neural Sentence Modeling. ☆64 · Updated 6 years ago
- Implementation of Densely Connected Attention Propagation for Reading Comprehension (NIPS 2018) ☆69 · Updated 6 years ago
- ☆31 · Updated 6 years ago
- Source code to reproduce the results in the ACL 2019 paper "Syntactically Supervised Transformers for Faster Neural Machine Translation" ☆81 · Updated 3 years ago
- Unsupervised neural machine translation; weight sharing; GAN ☆94 · Updated 7 years ago
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python. ☆22 · Updated 2 years ago
- Text Content Manipulation ☆45 · Updated 5 years ago
- Code to reproduce results in our ACL 2018 paper "Did the Model Understand the Question?" ☆33 · Updated 7 years ago
- Code from Jia and Liang, "Adversarial Examples for Evaluating Reading Comprehension Systems" (EMNLP 2017) ☆118 · Updated 7 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets, with SoTA performance. ☆86 · Updated 2 years ago
- Non-Monotonic Sequential Text Generation (ICML 2019) ☆72 · Updated 6 years ago
- Code for "A Multi-Task Approach for Disentangling Syntax and Semantics in Sentence Representations" (NAACL 2019) ☆67 · Updated 4 years ago
- Neutron: A PyTorch-based implementation of the Transformer and its variants. ☆64 · Updated 2 years ago
- PyTorch Language Model for the 1-Billion Word (LM1B / GBW) Dataset ☆123 · Updated 6 years ago
- [ACL'19] Code for "Semi-supervised Domain Adaptation for Dependency Parsing" ☆15 · Updated 6 years ago
- Distilling BERT using natural language generation. ☆38 · Updated 2 years ago
- Source code for "Straight to the Tree: Constituency Parsing with Neural Syntactic Distance", published at ACL 2018 ☆63 · Updated 7 years ago
- Re-implementation of "QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension" ☆120 · Updated 7 years ago
- A Toolkit for Training, Tracking, Saving Models and Syncing Results ☆62 · Updated 5 years ago
- Re-implementation of Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling (T. Shen et al., ICLR 2018) on PyTorch ☆42 · Updated 7 years ago
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination" ☆62 · Updated last month
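As a side note on the Average Attention Network entry above: a minimal sketch of its core idea as described in that paper, where each decoder position attends uniformly to all previous positions (a causal running mean) instead of using dot-product self-attention. The gating and feed-forward sublayers of the full model are omitted, and the function name here is illustrative.

```python
# Sketch of the cumulative-average core of the Average Attention Network
# (the full model also adds gating and feed-forward sublayers, omitted here).
import torch


def average_attention(x):
    """x: (batch, seq_len, d_model); position t receives the mean of x[:, :t+1]."""
    csum = torch.cumsum(x, dim=1)
    steps = torch.arange(1, x.size(1) + 1, device=x.device).view(1, -1, 1)
    return csum / steps


x = torch.randn(2, 5, 8)
y = average_attention(x)
print(y.shape)  # torch.Size([2, 5, 8])
```

Because the average has no learned query-key interaction, it can be computed incrementally at decoding time, which is the source of the speedup the paper's title refers to.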