laiguokun / bert-cloth
☆39 · Updated 5 years ago
Alternatives and similar repositories for bert-cloth
Users interested in bert-cloth are comparing it to the libraries listed below.
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- ☆78 · Updated 2 years ago
- ☆50 · Updated 2 years ago
- ☆69 · Updated 4 years ago
- ☆83 · Updated 5 years ago
- Code for the RecAdam paper "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" ☆117 · Updated 4 years ago
- Differentiable Product Quantization for End-to-End Embedding Compression ☆62 · Updated 2 years ago
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference ☆156 · Updated 3 years ago
- Danqi Chen's PhD Thesis ☆223 · Updated 5 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 3 years ago
- Source code for the ACL 2020 paper "Exclusive Hierarchical Decoding for Deep Keyphrase Generation" ☆55 · Updated 2 years ago
- LiveBot: Generating Live Video Comments Based on Visual and Textual Contexts (AAAI 2019) ☆122 · Updated 6 years ago
- Source code for "Efficient Training of BERT by Progressively Stacking" ☆112 · Updated 5 years ago
- Code for "Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View" ☆148 · Updated 6 years ago
- Notes from my introductory NLP lectures at Fudan University ☆37 · Updated 3 years ago
- Non-Monotonic Sequential Text Generation (ICML 2019) ☆72 · Updated 6 years ago
- Code for "Encoding Word Order in Complex-valued Embedding" ☆42 · Updated 5 years ago
- The official code repository for NumNet+ (https://leaderboard.allenai.org/drop/submission/blu418v76glsbnh1qvd0) ☆177 · Updated 11 months ago
- Pretrain CPM-1 ☆51 · Updated 4 years ago
- Must-read papers on improving efficiency for pre-trained language models ☆104 · Updated 2 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆251 · Updated 3 years ago
- Code for the ACL 2019 paper "Searching for Effective Neural Extractive Summarization: What Works and What's Next" ☆90 · Updated 4 years ago
- BiLSTM-CRF model for NER ☆15 · Updated 6 years ago
- Code release for our arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987) ☆184 · Updated 2 years ago
- Multilingual Neural Machine Translation with Knowledge Distillation (ICLR 2019) ☆70 · Updated 4 years ago
- Conversational Toolkit: an open-source toolkit for fast development and fair evaluation of text generation ☆127 · Updated 4 years ago
- Code for our EMNLP 2019 paper ☆36 · Updated 5 years ago
- Worth-reading papers and related resources on attention mechanisms, Transformers, and pretrained language models (PLMs) such as BERT ☆132 · Updated 4 years ago
- Source code for the EMNLP 2020 paper "Cold-Start and Interpretability: Turning Regular Expressions into Trainable Recurrent Neural Network…" ☆115 · Updated 3 years ago
- ☆75 · Updated 2 years ago