shahrukhx01 / multitask-learning-transformers
A simple recipe for training and running inference with Transformer architectures for Multi-Task Learning on custom datasets. You can find two approaches for achieving this in this repo.
☆95 · Updated 2 years ago
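The shared-encoder/multi-head pattern that multi-task Transformer setups like this repo use can be sketched schematically as follows. Everything here is an illustrative stand-in, not the repo's actual API: the class name, the toy character-count `encode`, and the lambda heads are all hypothetical, and a real version would plug a pretrained Transformer backbone into `encode` and use trainable per-task heads.

```python
# Schematic sketch of multi-task learning with a shared encoder:
# one backbone produces a representation, and each task routes that
# representation through its own lightweight head.

class SharedEncoderMTL:
    def __init__(self, heads):
        # heads: mapping from task name to a function that turns the
        # shared feature vector into a task-specific prediction.
        self.heads = heads

    def encode(self, text):
        # Placeholder for the shared Transformer backbone. A toy
        # bag-of-characters feature keeps the sketch runnable without
        # any deep-learning dependency.
        return [len(text), sum(map(ord, text)) % 97]

    def forward(self, text, task):
        # Compute the shared representation once, then dispatch to
        # the head registered for the requested task.
        features = self.encode(text)
        return self.heads[task](features)


# Two toy tasks sharing one encoder (both heads are hypothetical).
model = SharedEncoderMTL({
    "sentiment": lambda f: "pos" if f[1] % 2 == 0 else "neg",
    "length_bucket": lambda f: "long" if f[0] > 10 else "short",
})

print(model.forward("great movie", task="length_bucket"))  # prints "long"
```

The design point the sketch captures is that only the heads differ per task; during training, batches from different tasks update the same backbone parameters, which is where the transfer between tasks comes from.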
Alternatives and similar repositories for multitask-learning-transformers:
Users interested in multitask-learning-transformers are comparing it to the repositories listed below.
- PyTorch implementation of Recurrence over BERT (RoBERT) based on the paper https://arxiv.org/abs/1910.10781 and comparison with pyTorch … ☆80 · Updated 2 years ago
- [NAACL 2021] Code for the paper `Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self… ☆202 · Updated 2 years ago
- ☆44 · Updated 2 years ago
- Transformer, T5, and RoBERTa encoder-decoder models for product name generation ☆48 · Updated 3 years ago
- Code for the EMNLP 2022 paper "Zero-Shot Text Classification with Self-Training" ☆49 · Updated 2 years ago
- PyTorch implementation of SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models ☆61 · Updated 2 years ago
- ☆41 · Updated 3 years ago
- A repo exploring different NLP tasks that can be solved using T5 ☆172 · Updated 4 years ago
- ☆58 · Updated 2 years ago
- Research code for "What to Pre-Train on? Efficient Intermediate Task Selection", EMNLP 2021 ☆34 · Updated 3 years ago
- Source code for the paper "Learning from Noisy Labels for Entity-Centric Information Extraction", EMNLP 2021 ☆55 · Updated 3 years ago
- [DEPRECATED] Adapt Transformer-based language models to new text domains ☆87 · Updated last year
- Uncertainty-aware Self-training ☆121 · Updated last year
- Code accompanying the EMNLP 2020 paper "Cold-start Active Learning through Self-supervised Language Modeling" ☆40 · Updated 3 years ago
- Efficient Attention for Long Sequence Processing ☆93 · Updated last year
- Implementation of the self-adjusting Dice loss from the paper "Dice Loss for Data-imbalanced NLP Tasks" ☆107 · Updated 4 years ago
- A curated list of few-shot learning in NLP :-) ☆64 · Updated 3 years ago
- A Natural Language Inference (NLI) model based on Transformers (BERT and ALBERT) ☆132 · Updated last year
- A simple project training 3 separate NLP tasks simultaneously using multi-task learning ☆23 · Updated last year
- Official PyTorch implementation of SSMix (Findings of ACL 2021) ☆62 · Updated 3 years ago
- On Explaining Your Explanations of BERT: An Empirical Study with Sequence Classification ☆30 · Updated 2 years ago
- Models for automatically transforming toxic text into neutral text ☆34 · Updated last year
- A library for conducting ranking experiments with transformers ☆161 · Updated last year
- 👾 A library of state-of-the-art pretrained models for Natural Language Processing (NLP) ☆23 · Updated 5 years ago
- A Self-Supervised Contrastive Learning Framework for Aspect Detection ☆35 · Updated 3 years ago
- Multi-task modelling extensions for huggingface transformers ☆20 · Updated 2 years ago
- State-of-the-art Semantic Sentence Embeddings ☆99 · Updated 2 years ago
- Data and code for the paper "Exploring and Predicting Transferability across NLP Tasks", to appear at EMNLP 2020 ☆49 · Updated 4 years ago
- https://arxiv.org/pdf/1909.04054 ☆78 · Updated 2 years ago
- Weakly-supervised Text Classification Based on Keyword Graph ☆22 · Updated 2 years ago