shahrukhx01 / multitask-learning-transformers
A simple recipe for training and running inference with Transformer architectures for multi-task learning on custom datasets. This repo contains two approaches for achieving this.
☆88
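The description above names the recipe without showing it. As a rough illustration, one common multi-task setup is a shared encoder feeding task-specific classification heads. This is a minimal NumPy sketch, not the repo's actual code: `shared_encoder`, `HEADS`, and `forward` are hypothetical names, and the mean-pooled embedding table stands in for a real pretrained Transformer such as BERT.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, VOCAB = 16, 1000  # toy sizes; a real encoder (e.g. BERT) uses 768+ dims

# Shared parameters: one embedding table stands in for the Transformer encoder.
EMB = rng.standard_normal((VOCAB, HIDDEN))

def shared_encoder(token_ids):
    """Mean-pool token embeddings into one sentence vector (encoder stand-in)."""
    return EMB[token_ids].mean(axis=0)

# Task-specific heads: one linear classifier per task, all sharing the encoder.
HEADS = {
    "sentiment": rng.standard_normal((HIDDEN, 2)),  # 2 classes
    "topic":     rng.standard_normal((HIDDEN, 5)),  # 5 classes
}

def forward(token_ids, task):
    """Route the shared representation through the head chosen for `task`."""
    h = shared_encoder(token_ids)
    return h @ HEADS[task]

logits = forward([1, 42, 7], "topic")
print(logits.shape)  # (5,)
```

During training, batches from each task update the shared encoder and only that task's head, which is what lets one backbone serve several objectives at once.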
Related projects
Alternatives and complementary repositories for multitask-learning-transformers
- PyTorch implementation of Recurrence over BERT (RoBERT) based on the paper https://arxiv.org/abs/1910.10781 and comparison with PyTorch … (☆79)
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper (☆133)
- Long Document Summarization Papers (☆136)
- https://arxiv.org/pdf/1909.04054 (☆77)
- PyTorch implementation of SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models (☆59)
- Source code for the paper "Learning from Noisy Labels for Entity-Centric Information Extraction" (EMNLP 2021) (☆55)
- Implementation of Self-adjusting Dice Loss from the "Dice Loss for Data-imbalanced NLP Tasks" paper (☆105)
- Topic clustering library built on Transformer embeddings and cosine similarity metrics. Compatible with all BERT base transformers from hu… (☆41)
- A repo exploring different NLP tasks that can be solved using T5 (☆169)
- Code accompanying the EMNLP 2020 paper "Cold-start Active Learning through Self-supervised Language Modeling" (☆40)
- Multi^2OIE: Multilingual Open Information Extraction Based on Multi-Head Attention with BERT (Findings of ACL: EMNLP 2020) (☆57)
- [NAACL 2021] Code for the paper "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self… (☆200)
- Efficient Attention for Long Sequence Processing (☆87)
- Master's thesis with code investigating methods for incorporating long-context reasoning in low-resource languages, without the need to pre… (☆32)
- Research framework for low-resource text classification that lets the user experiment with classification models and active learning… (☆97)
- Code for the EMNLP 2022 paper "Zero-Shot Text Classification with Self-Training" (☆44)
- Fine-tune transformers with pytorch-lightning (☆44)
- Enhancing BERT training with Semi-supervised Generative Adversarial Networks in PyTorch/HuggingFace (☆92)
- A Natural Language Inference (NLI) model based on Transformers (BERT and ALBERT) (☆129)
- Shows how to fine-tune the BERT language model and use PyTorch-Transformers for text classification (☆69)
- Benchmarking various deep learning models such as BERT, ALBERT, and BiLSTMs on sentence entailment using two datasets: MultiNLI … (☆27)
- Code for SentiBERT: A Transferable Transformer-Based Architecture for Compositional Sentiment Semantics (ACL 2020) (☆76)
- Official PyTorch implementation of SSMix (Findings of ACL 2021) (☆61)