shahrukhx01 / multitask-learning-transformers
A simple recipe for training and running inference with Transformer architectures for multi-task learning on custom datasets. This repo demonstrates two approaches for achieving this.
☆91 · Updated 2 years ago
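One common recipe for multi-task learning with Transformers is a shared encoder trunk with one lightweight head per task. The sketch below is a minimal PyTorch illustration of that idea, not the repo's actual code; the class name, dimensions, and task setup are all hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a shared Transformer encoder with one
# classification head per task. Not taken from the repo itself.
class MultiTaskTransformer(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, num_labels_per_task=(3, 2)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # shared trunk
        # one linear classification head per task
        self.heads = nn.ModuleList(nn.Linear(d_model, n) for n in num_labels_per_task)

    def forward(self, input_ids, task_id):
        h = self.encoder(self.embed(input_ids))  # (batch, seq, d_model)
        pooled = h.mean(dim=1)                   # mean-pool over tokens
        return self.heads[task_id](pooled)       # task-specific logits

model = MultiTaskTransformer()
x = torch.randint(0, 1000, (2, 8))     # toy batch of 2 sequences, length 8
logits_a = model(x, task_id=0)         # shape (2, 3)
logits_b = model(x, task_id=1)         # shape (2, 2)
```

During training, batches from different tasks are typically interleaved, with each batch routed through the head matching its `task_id` while gradients flow into the shared trunk from every task.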
Alternatives and similar repositories for multitask-learning-transformers:
Users interested in multitask-learning-transformers are comparing it to the libraries listed below.
- PyTorch implementation of Recurrence over BERT (RoBERT) based on this paper https://arxiv.org/abs/1910.10781 and comparison with PyTorch … ☆80 · Updated 2 years ago
- ☆85 · Updated 3 years ago
- ☆43 · Updated 2 years ago
- PyTorch – SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models. ☆61 · Updated 2 years ago
- ☆40 · Updated 3 years ago
- Implementation of Self-adjusting Dice Loss from the "Dice Loss for Data-imbalanced NLP Tasks" paper ☆107 · Updated 4 years ago
- ☆33 · Updated last year
- https://arxiv.org/pdf/1909.04054 ☆78 · Updated 2 years ago
- Benchmarking various deep learning models such as BERT, ALBERT, and BiLSTMs on the task of sentence entailment using two datasets: MultiNLI … ☆28 · Updated 4 years ago
- Efficient Attention for Long Sequence Processing ☆91 · Updated last year
- Few-Shot-Intent-Detection includes popular challenging intent detection datasets with/without OOS queries and state-of-the-art baselines … ☆136 · Updated last year
- Topic clustering library built on Transformer embeddings and cosine similarity metrics. Compatible with all BERT base transformers from hu… ☆43 · Updated 3 years ago
- ☆77 · Updated 8 months ago
- A simple project training 3 separate NLP tasks simultaneously using multi-task learning ☆23 · Updated last year
- A repo to explore different NLP tasks which can be solved using T5 ☆170 · Updated 3 years ago
- Research code for "What to Pre-Train on? Efficient Intermediate Task Selection", EMNLP 2021 ☆34 · Updated 3 years ago
- Code for the paper "Zero-Shot Text Classification with Self-Training", EMNLP 2022 ☆49 · Updated last year
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆132 · Updated last year
- Official PyTorch implementation of SSMix (Findings of ACL 2021) ☆62 · Updated 3 years ago
- Research framework for low-resource text classification that allows the user to experiment with classification models and active learning… ☆99 · Updated 2 years ago
- A Natural Language Inference (NLI) model based on Transformers (BERT and ALBERT) ☆133 · Updated 11 months ago
- A Self-Supervised Contrastive Learning Framework for Aspect Detection ☆35 · Updated 3 years ago
- Master's thesis with code investigating methods for incorporating long-context reasoning in low-resource languages, without the need to pre… ☆32 · Updated 3 years ago
- Models for automatically transforming toxic text to neutral ☆33 · Updated last year
- ☆42 · Updated 4 years ago
- [NAACL 2021] Code for the paper "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self… ☆201 · Updated 2 years ago
- Defines Transformer, T5, and RoBERTa encoder–decoder models for product name generation ☆48 · Updated 3 years ago
- On Explaining Your Explanations of BERT: An Empirical Study with Sequence Classification