laiguokun / bert-cloth
☆39 · Updated 5 years ago
Alternatives and similar repositories for bert-cloth
Users interested in bert-cloth are comparing it to the repositories listed below
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference ☆159 · Updated 3 years ago
- Danqi Chen's PhD Thesis ☆224 · Updated 5 years ago
- ☆79 · Updated 2 years ago
- Source code of paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" ☆128 · Updated 4 years ago
- Pretrain CPM-1 ☆53 · Updated 4 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆252 · Updated 3 years ago
- ⛵️ The official PyTorch implementation for "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020). ☆315 · Updated 2 years ago
- NLP Course Material & QA ☆174 · Updated 3 years ago
- A plug-in of Microsoft DeepSpeed to fix the bug of DeepSpeed pipeline ☆25 · Updated 4 years ago
- Re-implementation of BiDAF (Bidirectional Attention Flow for Machine Comprehension, Minjoon Seo et al., ICLR 2017) on PyTorch. ☆245 · Updated last year
- Code for the RecAdam paper: Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting. ☆118 · Updated 4 years ago
- PyTorch implementation of Patient Knowledge Distillation for BERT Model Compression ☆203 · Updated 6 years ago
- Paper Lists, Notes and Slides, Focus on NLP. For summarization, please refer to https://github.com/xcfcode/Summarization-Papers ☆165 · Updated 3 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆91 · Updated 4 years ago
- Implementation of papers for text classification task on SST-1/SST-2 ☆66 · Updated last year
- A list of recent papers about meta / few-shot learning methods applied in NLP areas. ☆231 · Updated 4 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆172 · Updated 5 years ago
- Starter code for Stanford CS224n default final project on SQuAD 2.0 ☆188 · Updated 5 years ago
- Differentiable Product Quantization for End-to-End Embedding Compression. ☆63 · Updated 2 years ago
- Worth-reading papers and related resources on attention mechanisms, Transformers, and pretrained language models (PLMs) such as BERT. ☆130 · Updated 4 years ago
- Backup of Zhihu exposé posts about Shannon.AI (香侬科技, Beijing Xiangnong Huiyu Technology Co., Ltd.) ☆42 · Updated 5 years ago
- ☆232 · Updated 5 years ago
- Must-read papers on improving efficiency for pre-trained language models. ☆105 · Updated 2 years ago
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987). ☆185 · Updated 2 years ago
- PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ☆109 · Updated 6 years ago
- The official code repository for NumNet+ (https://leaderboard.allenai.org/drop/submission/blu418v76glsbnh1qvd0) ☆177 · Updated last year
- Visualization for simple attention and Google's multi-head attention. ☆68 · Updated 7 years ago
- ☆255 · Updated 2 years ago
- Deep learning images developed from nvidia/cuda-cudnn-devel-ubuntu. ☆23 · Updated 3 years ago
- PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations) ☆227 · Updated 4 years ago