shahrukhx01 / multitask-learning-transformers
A simple recipe for training and running inference with Transformer architectures for Multi-Task Learning on custom datasets. The repo includes two approaches for achieving this.
☆97 · Updated 3 years ago
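The common pattern behind repos like this one is hard parameter sharing: a single encoder is shared across tasks, and each task gets its own lightweight head on top of the shared representation. Below is a minimal structural sketch of that idea in plain Python (no deep-learning dependencies); `SharedEncoder` and `TaskHead` are hypothetical stand-ins for a Transformer encoder and classification heads, not code from this repo.

```python
import random

class SharedEncoder:
    """Stand-in for a Transformer encoder shared across all tasks (toy)."""
    def __init__(self, dim):
        self.dim = dim

    def encode(self, tokens):
        # Toy "embedding": hash each token into a fixed-size vector, then average.
        vec = [0.0] * self.dim
        for tok in tokens:
            h = hash(tok)
            for i in range(self.dim):
                vec[i] += ((h >> i) & 1) - 0.5
        n = max(len(tokens), 1)
        return [v / n for v in vec]

class TaskHead:
    """Per-task classification head on top of the shared representation."""
    def __init__(self, dim, num_labels, seed=0):
        rng = random.Random(seed)
        self.weights = [[rng.uniform(-1, 1) for _ in range(dim)]
                        for _ in range(num_labels)]

    def __call__(self, rep):
        # Linear scoring per label; return the argmax label index.
        scores = [sum(w * r for w, r in zip(row, rep)) for row in self.weights]
        return scores.index(max(scores))

# One shared encoder, multiple task-specific heads (hard parameter sharing).
encoder = SharedEncoder(dim=16)
heads = {
    "sentiment": TaskHead(16, num_labels=2, seed=1),
    "topic": TaskHead(16, num_labels=4, seed=2),
}

# Every task consumes the SAME shared representation.
rep = encoder.encode("this movie was great".split())
preds = {task: head(rep) for task, head in heads.items()}
```

In a real implementation the encoder would be a pretrained model (e.g. from HuggingFace Transformers) and training would alternate batches from each task's dataset, backpropagating each task's loss through its head and the shared encoder.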
Alternatives and similar repositories for multitask-learning-transformers
Users interested in multitask-learning-transformers are comparing it to the repositories listed below.
- pyTorch implementation of Recurrence over BERT (RoBERT) based on this paper https://arxiv.org/abs/1910.10781 and comparison with pyTorch … ☆82 · Updated 2 years ago
- PyTorch – SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models. ☆63 · Updated 3 years ago
- A repo to explore different NLP tasks which can be solved using T5 ☆172 · Updated 4 years ago
- [NAACL 2021] This is the code for our paper `Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self… ☆203 · Updated 2 years ago
- ☆45 · Updated 2 years ago
- [DEPRECATED] Adapt Transformer-based language models to new text domains ☆87 · Updated last year
- A Framework for Textual Entailment based Zero Shot text classification ☆152 · Updated last year
- Efficient Attention for Long Sequence Processing ☆95 · Updated last year
- Few-Shot-Intent-Detection includes popular challenging intent detection datasets with/without OOS queries and state-of-the-art baselines … ☆144 · Updated last year
- Research code for "What to Pre-Train on? Efficient Intermediate Task Selection", EMNLP 2021 ☆35 · Updated 3 years ago
- Code accompanying EMNLP 2020 paper "Cold-start Active Learning through Self-supervised Language Modeling". ☆40 · Updated 4 years ago
- ☆59 · Updated 2 years ago
- Source code for the paper "Learning from Noisy Labels for Entity-Centric Information Extraction", EMNLP 2021 ☆55 · Updated 3 years ago
- Collection of NLP model explanations and accompanying analysis tools ☆144 · Updated 2 years ago
- A Natural Language Inference (NLI) model based on Transformers (BERT and ALBERT) ☆132 · Updated last year
- Multilingual abstractive summarization dataset extracted from WikiHow. ☆92 · Updated 4 months ago
- Code for the paper "Zero-Shot Text Classification with Self-Training", EMNLP 2022 ☆50 · Updated 2 months ago
- https://arxiv.org/pdf/1909.04054 ☆79 · Updated 2 years ago
- ☆42 · Updated 3 years ago
- Define Transformers, T5 model and RoBERTa Encoder decoder model for product names generation ☆48 · Updated 3 years ago
- Detect toxic spans in toxic texts ☆69 · Updated 2 years ago
- Enhancing the BERT training with Semi-supervised Generative Adversarial Networks in Pytorch/HuggingFace ☆96 · Updated 3 years ago
- Main repository for "CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters" ☆202 · Updated last year
- ☆33 · Updated 2 years ago
- ☆47 · Updated 2 years ago
- Some notebooks for NLP ☆205 · Updated last year
- Research framework for low resource text classification that allows the user to experiment with classification models and active learning… ☆102 · Updated 3 years ago
- Implementation of Self-adjusting Dice Loss from the "Dice Loss for Data-imbalanced NLP Tasks" paper ☆108 · Updated 4 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆133 · Updated 2 years ago
- A simple project training 3 separate NLP tasks simultaneously using Multitask-Learning ☆23 · Updated 2 years ago