JosselinSomervilleRoberts / BERT-Multitask-learning
Multitask learning on a BERT backbone. Makes it easy to train a BERT model with state-of-the-art methods such as PCGrad, Gradient Vaccine, PALs, scheduling, class-imbalance handling, and many optimizations
☆20Updated 2 years ago
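Among the methods the repo lists, PCGrad (Yu et al., 2020) mitigates conflicting task gradients by projecting each task's gradient onto the normal plane of any gradient it conflicts with. The repo's own implementation is not shown here; the core update can be sketched as follows in NumPy over flattened gradients (the `pcgrad` function name and interface are illustrative, not from the repo):

```python
import numpy as np

def pcgrad(grads, rng=None):
    """PCGrad sketch: for each task gradient, remove the component that
    conflicts (negative dot product) with the other tasks' original
    gradients, visiting them in random order, then sum the results.

    grads: list of 1-D NumPy arrays, one flattened gradient per task.
    """
    rng = rng or np.random.default_rng(0)
    projected = []
    for i, g in enumerate(grads):
        g = g.copy()
        others = [j for j in range(len(grads)) if j != i]
        rng.shuffle(others)  # random order, as in the original algorithm
        for j in others:
            dot = g @ grads[j]
            if dot < 0:  # conflicting direction: project it out
                g -= dot / (grads[j] @ grads[j]) * grads[j]
        projected.append(g)
    return np.sum(projected, axis=0)
```

For two tasks with gradients `[1, 0]` and `[-1, 1]` (which conflict, since their dot product is negative), each is projected to be orthogonal to the other before summing, so neither task's update is dominated by the conflict.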
Alternatives and similar repositories for BERT-Multitask-learning
Users interested in BERT-Multitask-learning are comparing it to the libraries listed below
- PyTorch – SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models.☆62Updated 3 years ago
- A simple recipe for training and inferencing Transformer architecture for Multi-Task Learning on custom datasets. You can find two approa…☆99Updated 3 years ago
- An extension of the Transformers library to include a T5ForSequenceClassification class.☆40Updated 2 years ago
- Code for ACL 2023 paper "HiTIN: Hierarchy-aware Tree Isomorphism Network for Hierarchical Text Classification"☆38Updated last year
- ☆48Updated 3 years ago
- Code for "Finetuning Pretrained Transformers into Variational Autoencoders"☆40Updated 3 years ago
- Leveraging ChatGPT for Text Data Augmentation☆54Updated last year
- In this implementation, using the Flan T5 large language model, we performed the Text Classification task on the IMDB dataset and obtaine…☆23Updated 2 years ago
- This repository is the official implementation of our paper MVP: Multi-task Supervised Pre-training for Natural Language Generation.☆73Updated 3 years ago
- A PyTorch & Keras implementation and demo of Fastformer.☆192Updated 3 years ago
- PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models☆111Updated last month
- Exploiting Global and Local Hierarchies for Hierarchical Text Classification☆29Updated 3 years ago
- ☆19Updated 4 years ago
- This repository contains a custom implementation of the BERT model, fine-tuned for specific tasks, along with an implementation of Low Ra…☆78Updated 2 years ago
- Code for the EMNLP 2022 paper "Zero-Shot Text Classification with Self-Training"☆51Updated 4 months ago
- Defines Transformer, T5, and RoBERTa encoder-decoder models for product-name generation☆48Updated 4 years ago
- Code for the NAACL 2022 long paper "DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings"☆296Updated 3 years ago
- Code for our paper "Dual Contrastive Learning: Text Classification via Label-Aware Data Augmentation"☆166Updated 3 years ago
- Fine-tune a T5 transformer model using PyTorch & Transformers🤗☆220Updated 5 years ago
- [EMNLP 2021] Text AutoAugment: Learning Compositional Augmentation Policy for Text Classification☆130Updated 2 years ago
- [NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Lea…☆76Updated 2 years ago
- A curated list of zero-shot learning in NLP. :-)☆15Updated 4 years ago
- Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations☆133Updated 6 months ago
- A multi-purpose toolkit for table-to-text generation: web interface, Python bindings, CLI commands.☆57Updated last year
- Official implementation for the paper "A Cheaper and Better Diffusion Language Model with Soft-Masked Noise"☆59Updated 2 years ago
- ☆52Updated 4 years ago
- PyTorch implementation of Recurrence over BERT (RoBERT) based on this paper https://arxiv.org/abs/1910.10781 and comparison with PyTorch …☆82Updated 3 years ago
- Reduce the size of pretrained Hugging Face models via vocabulary trimming.☆48Updated 3 years ago
- Official Implementation of "DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization."☆143Updated 3 years ago
- Collection of scripts to pretrain T5 on unlabeled text using PyTorch Lightning. CORD-19 pretraining provided as an example.☆32Updated 4 years ago