shahrukhx01 / multitask-learning-transformers
A simple recipe for training and running inference with a Transformer architecture for multi-task learning on custom datasets. You can find two approaches for achieving this in this repo.
☆96 · Updated 3 years ago
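For orientation, the sketch below shows the shared-encoder, per-task-head pattern that multi-task transformer setups of this kind generally follow. It is an illustrative example only, not code from this repository; the model name, task names, and label counts are assumptions.

```python
# Minimal multi-task sketch: one shared transformer encoder, one head per task.
# Illustrative only -- model name, task names, and label counts are assumptions,
# not taken from this repository.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskTransformer(nn.Module):
    def __init__(self, model_name="bert-base-uncased", task_labels=None):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)  # shared weights
        hidden = self.encoder.config.hidden_size
        # One lightweight classification head per task.
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden, n_labels)
            for task, n_labels in (task_labels or {}).items()
        })

    def forward(self, task, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        return self.heads[task](cls)       # task-specific logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiTaskTransformer(task_labels={"sentiment": 2, "topic": 5})
batch = tokenizer(["a sample sentence"], return_tensors="pt")
logits = model("sentiment", batch["input_ids"], batch["attention_mask"])
```

The other common approach is to alternate batches from each task's dataloader while sharing the encoder, rather than merging datasets; either way, only the heads are task-specific.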
Alternatives and similar repositories for multitask-learning-transformers
Users interested in multitask-learning-transformers are comparing it to the libraries listed below.
- PyTorch implementation of Recurrence over BERT (RoBERT) based on this paper https://arxiv.org/abs/1910.10781 and comparison with PyTorch … ☆82 · Updated 2 years ago
- [NAACL 2021] This is the code for our paper `Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self… ☆204 · Updated 3 years ago
- [DEPRECATED] Adapt Transformer-based language models to new text domains ☆87 · Updated last year
- PyTorch – SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models. ☆62 · Updated 3 years ago
- ☆42 · Updated 4 years ago
- ☆46 · Updated 3 years ago
- Collection of NLP model explanations and accompanying analysis tools ☆144 · Updated 2 years ago
- Research framework for low-resource text classification that allows the user to experiment with classification models and active learning… ☆101 · Updated 3 years ago
- Define Transformers, T5 model and RoBERTa encoder-decoder model for product name generation ☆48 · Updated 3 years ago
- Code accompanying the EMNLP 2020 paper "Cold-start Active Learning through Self-supervised Language Modeling" ☆40 · Updated 4 years ago
- Few-Shot-Intent-Detection includes popular challenging intent detection datasets with/without OOS queries and state-of-the-art baselines … ☆151 · Updated 2 years ago
- Code for the EMNLP 2022 paper "Zero-Shot Text Classification with Self-Training" ☆50 · Updated 4 months ago
- https://arxiv.org/pdf/1909.04054 ☆79 · Updated 2 years ago
- A repo to explore different NLP tasks that can be solved using T5 ☆172 · Updated 4 years ago
- Main repository for "CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters" ☆201 · Updated last year
- A framework for textual entailment-based zero-shot text classification ☆152 · Updated last year
- ☆47 · Updated 2 years ago
- Models for automatically transforming toxic text to neutral ☆35 · Updated last year
- ☆33 · Updated 2 years ago
- Code and data from the paper "BERT Got a Date: Introducing Transformers to Temporal Tagging" ☆68 · Updated 3 years ago
- ☆59 · Updated 2 years ago
- This repository contains the code, data, and models of the paper titled "XL-Sum: Large-Scale Multilingual Abstractive Summarization for 4… ☆275 · Updated last year
- Efficient Attention for Long Sequence Processing ☆98 · Updated last year
- ☆87 · Updated 3 years ago
- A Natural Language Inference (NLI) model based on Transformers (BERT and ALBERT) ☆135 · Updated last year
- Research code for "What to Pre-Train on? Efficient Intermediate Task Selection", EMNLP 2021 ☆37 · Updated 3 years ago
- A library to conduct ranking experiments with transformers ☆160 · Updated 2 years ago
- Neural information retrieval / Semantic search / Bi-encoders ☆174 · Updated 2 years ago
- A simple project training 3 separate NLP tasks simultaneously using multi-task learning ☆23 · Updated 2 years ago
- The official code for PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization ☆156 · Updated 2 years ago