nlpyang / pytorch-transformers
👾 A library of state-of-the-art pretrained models for Natural Language Processing (NLP)
⭐ 24 · Updated 5 years ago
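As context for the comparisons below, here is a minimal sketch of loading a pretrained BERT model with this library, assuming the fork keeps the upstream `pytorch_transformers` 1.x API; the `bert-base-uncased` checkpoint is used purely as an illustration:

```python
# Minimal sketch: extract contextual embeddings with pytorch_transformers
# (assumes the upstream 1.x API of BertModel / BertTokenizer).
import torch
from pytorch_transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

# Encode a sentence and run it through the encoder.
input_ids = torch.tensor([tokenizer.encode("Hello, NLP world!")])
with torch.no_grad():
    last_hidden_state = model(input_ids)[0]  # (batch, seq_len, hidden_size)
print(last_hidden_state.shape)
```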
Alternatives and similar repositories for pytorch-transformers
Users that are interested in pytorch-transformers are comparing it to the libraries listed below
- A curated list of few-shot learning in NLP. :-) ⭐ 64 · Updated 3 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ⭐ 132 · Updated 2 years ago
- [EMNLP 2021] Text AutoAugment: Learning Compositional Augmentation Policy for Text Classification ⭐ 128 · Updated 2 years ago
- Uncertainty-aware Self-training ⭐ 121 · Updated last year
- A simple recipe for training and inferencing Transformer architecture for Multi-Task Learning on custom datasets. You can find two approa… ⭐ 97 · Updated 2 years ago
- Code for the paper "True Few-Shot Learning in Language Models" (https://arxiv.org/abs/2105.11447) ⭐ 144 · Updated 3 years ago
- [NAACL 2021] This is the code for our paper "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self…" ⭐ 203 · Updated 2 years ago
- pyTorch implementation of Recurrence over BERT (RoBERT) based on this paper https://arxiv.org/abs/1910.10781 and comparison with pyTorch … ⭐ 82 · Updated 2 years ago
- Meta learning with BERT as a learner ⭐ 107 · Updated last year
- ⭐ 117 · Updated 3 years ago
- This shows how to fine-tune the BERT language model and use PyTorch-transformers for text classification ⭐ 71 · Updated 5 years ago
- For the code release of our arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987). ⭐ 184 · Updated 2 years ago
- ⭐ 292 · Updated 2 years ago
- A Light and Modular PyTorch NLP Project Template ⭐ 59 · Updated 4 years ago
- [EMNLP 2021] Improving and Simplifying Pattern Exploiting Training ⭐ 154 · Updated 3 years ago
- A repo to explore different NLP tasks which can be solved using T5 ⭐ 172 · Updated 4 years ago
- [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240 ⭐ 168 · Updated 2 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ⭐ 117 · Updated last year
- Enhancing the BERT training with Semi-supervised Generative Adversarial Networks ⭐ 229 · Updated 2 years ago
- Research code for ACL 2020 paper: "Distilling Knowledge Learned in BERT for Text Generation". ⭐ 131 · Updated 3 years ago
- Code associated with the Don't Stop Pretraining ACL 2020 paper ⭐ 532 · Updated 3 years ago
- PyTorch - SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models. ⭐ 62 · Updated 3 years ago
- Named Entity Recognition with Small Strongly Labeled and Large Weakly Labeled Data ⭐ 100 · Updated last year
- Few-Shot-Intent-Detection includes popular challenging intent detection datasets with/without OOS queries and state-of-the-art baselines … ⭐ 143 · Updated last year
- Meta-learning for NLP ⭐ 50 · Updated 4 years ago
- BOND: BERT-Assisted Open-Domain Named Entity Recognition with Distant Supervision ⭐ 290 · Updated 4 years ago
- ⭐ 78 · Updated 2 years ago
- CharBERT: Character-aware Pre-trained Language Model (COLING 2020) ⭐ 121 · Updated 4 years ago
- PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ⭐ 107 · Updated 6 years ago
- X-Transformer: Taming Pretrained Transformers for eXtreme Multi-label Text Classification ⭐ 138 · Updated 4 years ago