shahrukhx01 / multitask-learning-transformers
A simple recipe for training and running inference with a Transformer architecture for Multi-Task Learning on custom datasets. This repo contains two approaches for achieving this.
☆95 · Updated 2 years ago
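The recipe described above typically amounts to hard parameter sharing: one shared encoder whose weights every task reuses, plus a lightweight head per task. Below is a minimal sketch of that pattern, not the repo's actual implementation — the "encoder" here is a stand-in (a fixed random projection in NumPy) for a real pretrained Transformer body, and all names (`shared_encoder`, `task_heads`, `predict`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

INPUT_DIM = 32   # stand-in for tokenized input features
HIDDEN_DIM = 16  # stand-in for a Transformer's hidden size

# Shared "encoder": a fixed random projection standing in for a
# pretrained Transformer body (e.g. BERT) whose weights all tasks share.
shared_encoder = rng.normal(size=(INPUT_DIM, HIDDEN_DIM))

# One lightweight head per task, mapping the shared representation
# to that task's own label space (hard parameter sharing).
task_heads = {
    "sentiment": rng.normal(size=(HIDDEN_DIM, 2)),  # 2 classes
    "topic": rng.normal(size=(HIDDEN_DIM, 5)),      # 5 classes
}

def encode(features: np.ndarray) -> np.ndarray:
    """Shared forward pass used by every task."""
    return np.tanh(features @ shared_encoder)

def predict(features: np.ndarray, task: str) -> np.ndarray:
    """Route the shared representation through the chosen task head."""
    logits = encode(features) @ task_heads[task]
    # Softmax over that task's classes.
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

batch = rng.normal(size=(4, INPUT_DIM))  # 4 toy examples
probs = predict(batch, "sentiment")
print(probs.shape)  # (4, 2)
```

In a real setup the shared projection would be a pretrained Transformer, the heads would be `nn.Linear` layers, and training would alternate (or mix) batches from each task so the shared weights learn from all of them.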
Alternatives and similar repositories for multitask-learning-transformers:
Users interested in multitask-learning-transformers are comparing it to the libraries listed below:
- PyTorch implementation of Recurrence over BERT (RoBERT) based on this paper https://arxiv.org/abs/1910.10781 and comparison with PyTorch … ☆81 · Updated 2 years ago
- ☆44 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- Define Transformers, T5 model and RoBERTa Encoder decoder model for product names generation ☆48 · Updated 3 years ago
- ☆41 · Updated 3 years ago
- PyTorch – SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models. ☆62 · Updated 2 years ago
- Code accompanying EMNLP 2020 paper "Cold-start Active Learning through Self-supervised Language Modeling". ☆40 · Updated 3 years ago
- Implementation of Self-adjusting Dice Loss from "Dice Loss for Data-imbalanced NLP Tasks" paper ☆109 · Updated 4 years ago
- Topic clustering library built on Transformer embeddings and cosine similarity metrics. Compatible with all BERT base transformers from hu… ☆43 · Updated 3 years ago
- A simple project training 3 separate NLP tasks simultaneously using Multitask-Learning ☆23 · Updated last year
- 👾 A library of state-of-the-art pretrained models for Natural Language Processing (NLP) ☆23 · Updated 5 years ago
- [NAACL 2021] This is the code for our paper `Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self… ☆202 · Updated 2 years ago
- Source code for paper "Learning from Noisy Labels for Entity-Centric Information Extraction", EMNLP 2021 ☆55 · Updated 3 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆52 · Updated last year
- Efficient Attention for Long Sequence Processing ☆93 · Updated last year
- A repo to explore different NLP tasks which can be solved using T5 ☆172 · Updated 4 years ago
- code for the paper "Zero-Shot Text Classification with Self-Training" for EMNLP 2022 ☆49 · Updated 2 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper ☆134 · Updated last year
- Uncertainty-aware Self-training ☆121 · Updated last year
- A curated list of few-shot learning in NLP. :-) ☆64 · Updated 3 years ago
- ☆87 · Updated 3 years ago
- AttentionRank: Unsupervised keyphrase Extraction using Self and Cross Attentions ☆26 · Updated last year
- ☆120 · Updated 5 years ago
- A Light and Modular PyTorch NLP Project Template ☆59 · Updated 4 years ago
- ☆77 · Updated 11 months ago
- Collection of NLP model explanations and accompanying analysis tools ☆145 · Updated last year
- ☆58 · Updated 2 years ago
- A library to conduct ranking experiments with transformers. ☆161 · Updated last year
- Research framework for low resource text classification that allows the user to experiment with classification models and active learning… ☆102 · Updated 3 years ago
- A Self-Supervised Contrastive Learning Framework for Aspect Detection ☆35 · Updated 3 years ago