shahrukhx01 / multitask-learning-transformers
A simple recipe for training and running inference with a Transformer architecture for Multi-Task Learning on custom datasets. The repo demonstrates two approaches for achieving this.
☆99 · Updated 3 years ago
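The "simple recipe" described above typically means a shared Transformer encoder feeding one lightweight head per task. The sketch below illustrates that wiring only; it is not the repo's actual code, a plain linear projection stands in for the pretrained encoder, and all class, method, and task names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedEncoderMultiTask:
    """Toy multi-task model: one shared 'encoder' projection feeds
    several task-specific heads. In a real setup the encoder would be
    a pretrained Transformer (e.g. BERT); a single linear map stands
    in for it here so the multi-head wiring is easy to see."""

    def __init__(self, d_in, d_hidden, task_num_labels):
        # One shared weight matrix, plus one output head per task.
        self.W_enc = rng.normal(scale=0.02, size=(d_in, d_hidden))
        self.heads = {
            task: rng.normal(scale=0.02, size=(d_hidden, n_labels))
            for task, n_labels in task_num_labels.items()
        }

    def forward(self, x, task):
        h = np.tanh(x @ self.W_enc)    # shared representation
        return h @ self.heads[task]    # task-specific logits

model = SharedEncoderMultiTask(
    d_in=16, d_hidden=8,
    task_num_labels={"sentiment": 2, "topic": 5},
)
batch = rng.normal(size=(4, 16))
print(model.forward(batch, "sentiment").shape)  # one batch, two heads
print(model.forward(batch, "topic").shape)
```

The alternative approach (the second one such repos usually offer) is to fine-tune a separate model per task; the shared-encoder variant trades some per-task capacity for a smaller footprint and cross-task transfer.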
Alternatives and similar repositories for multitask-learning-transformers
Users interested in multitask-learning-transformers are comparing it to the libraries listed below.
- PyTorch implementation of Recurrence over BERT (RoBERT) based on this paper https://arxiv.org/abs/1910.10781 and comparison with PyTorch …☆82 · Updated 3 years ago
- PyTorch – SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models.☆62 · Updated 3 years ago
- [NAACL 2021] This is the code for our paper `Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self…☆206 · Updated 3 years ago
- A repo exploring different NLP tasks that can be solved using T5☆173 · Updated 4 years ago
- Topic clustering library built on Transformer embeddings and cosine similarity metrics. Compatible with all BERT base transformers from hu…☆44 · Updated 4 years ago
- [DEPRECATED] Adapt Transformer-based language models to new text domains☆86 · Updated last year
- Defines Transformer, T5, and RoBERTa encoder-decoder models for product name generation☆48 · Updated 4 years ago
- Collection of NLP model explanations and accompanying analysis tools☆144 · Updated 2 years ago
- ☆42 · Updated 4 years ago
- ☆47 · Updated 3 years ago
- A framework for Textual Entailment based zero-shot text classification☆153 · Updated last year
- Main repository for "CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters"☆199 · Updated 2 years ago
- Code accompanying the EMNLP 2020 paper "Cold-start Active Learning through Self-supervised Language Modeling".☆40 · Updated 4 years ago
- https://arxiv.org/pdf/1909.04054☆78 · Updated 3 years ago
- Research framework for low-resource text classification that allows the user to experiment with classification models and active learning…☆101 · Updated 3 years ago
- Efficient Attention for Long Sequence Processing☆98 · Updated 2 years ago
- A Natural Language Inference (NLI) model based on Transformers (BERT and ALBERT)☆138 · Updated last year
- ☆88 · Updated 4 years ago
- Benchmarking various deep learning models such as BERT, ALBERT, and BiLSTMs on the task of sentence entailment using two datasets - MultiNLI …☆28 · Updated 5 years ago
- ☆60 · Updated 3 years ago
- Few-Shot-Intent-Detection includes popular challenging intent detection datasets with/without OOS queries and state-of-the-art baselines …☆153 · Updated 2 years ago
- Source code for the paper "Learning from Noisy Labels for Entity-Centric Information Extraction", EMNLP 2021☆55 · Updated 4 years ago
- Neural information retrieval / semantic search / bi-encoders☆174 · Updated 2 years ago
- Multilingual abstractive summarization dataset extracted from WikiHow.☆99 · Updated 10 months ago
- State-of-the-art semantic sentence embeddings☆100 · Updated 3 years ago
- ☆40 · Updated 2 years ago
- Implementation of the self-adjusting Dice loss from the paper "Dice Loss for Data-imbalanced NLP Tasks"☆109 · Updated 5 years ago
- ☆47 · Updated last week
- Code and data from the paper "BERT Got a Date: Introducing Transformers to Temporal Tagging"☆69 · Updated 3 years ago
- Some notebooks for NLP☆207 · Updated 2 years ago
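One of the listed repos implements the self-adjusting Dice loss from "Dice Loss for Data-imbalanced NLP Tasks" (Li et al., 2020). The core formula, as published in the paper, can be sketched roughly as below; this is not that repo's code, and the function and argument names are mine.

```python
import numpy as np

def self_adjusting_dice_loss(probs, targets, alpha=1.0, gamma=1.0):
    """Per-example self-adjusting Dice loss, sketched from the paper
    "Dice Loss for Data-imbalanced NLP Tasks" (Li et al., 2020).

    probs   : predicted probability of the positive class, shape (N,)
    targets : binary labels in {0, 1}, shape (N,)
    alpha   : down-weights easy examples via the (1 - p)**alpha factor
    gamma   : smoothing constant added to numerator and denominator
    """
    weighted = ((1.0 - probs) ** alpha) * probs
    dsc = (2.0 * weighted * targets + gamma) / (weighted + targets + gamma)
    return float(np.mean(1.0 - dsc))

probs = np.array([0.9, 0.2, 0.7, 0.4])
targets = np.array([1, 0, 1, 0])
print(self_adjusting_dice_loss(probs, targets))
```

The `(1 - p)**alpha` factor is the "self-adjusting" part: confident (easy) positives contribute a smaller gradient, which is the paper's answer to the class imbalance that plain cross-entropy handles poorly.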