shahrukhx01 / multitask-learning-transformers
A simple recipe for training and running inference with Transformer architectures for Multi-Task Learning on custom datasets. This repo demonstrates two approaches for achieving this.
☆96 · Updated 3 years ago
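The shared-encoder recipe described above (one Transformer backbone with a lightweight head per task) can be sketched roughly as follows; the model sizes, task names, and pooling choice here are illustrative assumptions, not taken from the repo:

```python
# Minimal sketch of shared-encoder multi-task learning: one Transformer
# backbone shared across tasks, plus one small classification head per task.
# All shapes and task names below are hypothetical examples.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, vocab_size, d_model, task_num_labels):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # shared across tasks
        # one linear head per task, selected by name at forward time
        self.heads = nn.ModuleDict(
            {task: nn.Linear(d_model, n) for task, n in task_num_labels.items()}
        )

    def forward(self, input_ids, task):
        h = self.encoder(self.embed(input_ids))  # (batch, seq, d_model)
        pooled = h.mean(dim=1)                   # simple mean pooling over tokens
        return self.heads[task](pooled)          # task-specific logits

model = MultiTaskModel(vocab_size=1000, d_model=64,
                       task_num_labels={"sentiment": 2, "topic": 4})
x = torch.randint(0, 1000, (3, 10))       # batch of 3 token sequences
sentiment_logits = model(x, "sentiment")  # shape (3, 2)
topic_logits = model(x, "topic")          # shape (3, 4)
```

Training then typically alternates batches between tasks, so gradients from every task update the shared encoder while each head only sees its own task's data.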
Alternatives and similar repositories for multitask-learning-transformers
Users interested in multitask-learning-transformers are comparing it to the libraries listed below.
- PyTorch implementation of Recurrence over BERT (RoBERT) based on this paper https://arxiv.org/abs/1910.10781 and comparison with PyTorch … ☆82 · Updated 2 years ago
- PyTorch – SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models. ☆63 · Updated 3 years ago
- [NAACL 2021] This is the code for our paper `Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self… ☆204 · Updated 3 years ago
- ☆45 · Updated 3 years ago
- A repo to explore different NLP tasks which can be solved using T5 ☆172 · Updated 4 years ago
- [DEPRECATED] Adapt Transformer-based language models to new text domains ☆87 · Updated last year
- Main repository for "CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters" ☆201 · Updated last year
- Define Transformers, T5 model and RoBERTa Encoder decoder model for product names generation ☆48 · Updated 3 years ago
- ☆42 · Updated 3 years ago
- Collection of NLP model explanations and accompanying analysis tools ☆144 · Updated 2 years ago
- Research framework for low resource text classification that allows the user to experiment with classification models and active learning… ☆101 · Updated 3 years ago
- A Natural Language Inference (NLI) model based on Transformers (BERT and ALBERT) ☆133 · Updated last year
- Code for the EMNLP 2022 paper "Zero-Shot Text Classification with Self-Training" ☆50 · Updated 3 months ago
- Models for automatically transforming toxic text to neutral ☆35 · Updated last year
- ☆87 · Updated 3 years ago
- Research code for "What to Pre-Train on? Efficient Intermediate Task Selection", EMNLP 2021 ☆36 · Updated 3 years ago
- Efficient Attention for Long Sequence Processing ☆98 · Updated last year
- https://arxiv.org/pdf/1909.04054 ☆79 · Updated 2 years ago
- ☆59 · Updated 2 years ago
- Multilingual abstractive summarization dataset extracted from WikiHow. ☆94 · Updated 5 months ago
- Implementation of Self-adjusting Dice Loss from the "Dice Loss for Data-imbalanced NLP Tasks" paper ☆108 · Updated 4 years ago
- A library to conduct ranking experiments with transformers. ☆160 · Updated 2 years ago
- Abstractive opinion summarization system (SelSum) and the largest dataset of Amazon product summaries (AmaSum). EMNLP 2021 conference pap… ☆47 · Updated 3 years ago
- Multi-task modelling extensions for huggingface transformers ☆21 · Updated 2 years ago
- State of the art Semantic Sentence Embeddings ☆99 · Updated 3 years ago
- Few-Shot-Intent-Detection includes popular challenging intent detection datasets with/without OOS queries and state-of-the-art baselines… ☆150 · Updated 2 years ago
- Uncertainty-aware Self-training ☆121 · Updated last year
- ☆33 · Updated 2 years ago
- Source code for paper "Learning from Noisy Labels for Entity-Centric Information Extraction", EMNLP 2021 ☆55 · Updated 3 years ago
- Master thesis with code investigating methods for incorporating long-context reasoning in low-resource languages, without the need to pre…
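One of the entries above implements the self-adjusting Dice loss from "Dice Loss for Data-imbalanced NLP Tasks" (Li et al., 2020). As a rough illustration of that technique for the binary case (with illustrative default hyperparameters, not the paper's tuned values):

```python
# Sketch of the self-adjusting Dice loss for binary labels.
# alpha controls the self-adjusting factor (1 - p)^alpha that down-weights
# easy, confident examples; gamma is a smoothing constant.
# Default values here are illustrative assumptions.
import torch

def self_adjusting_dice_loss(probs, targets, alpha=1.0, gamma=1.0):
    """probs: predicted probability of the positive class; targets: 0/1 floats."""
    adjusted = ((1 - probs) ** alpha) * probs  # self-adjusting weight on p
    dice = (2 * adjusted * targets + gamma) / (adjusted + targets + gamma)
    return (1 - dice).mean()

p = torch.tensor([0.9, 0.1, 0.8])
y = torch.tensor([1.0, 0.0, 1.0])
loss = self_adjusting_dice_loss(p, y)  # scalar tensor in (0, 1)
```

Compared with cross-entropy, the Dice-style objective is closer to a soft F1 and is less dominated by the easy negatives that make up most tokens in imbalanced tagging tasks.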