Ankush7890 / ssfinetuning
A package for fine-tuning pretrained NLP transformers using semi-supervised learning
☆14 · Updated 4 years ago
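For context, semi-supervised fine-tuning augments a small labeled set with model-generated labels on unlabeled text. Below is a minimal sketch of pseudo-labeling, one common semi-supervised strategy, written against the plain Hugging Face Transformers API. It is illustrative only and does not reflect ssfinetuning's actual interface; the model name, example texts, and the 0.9 confidence threshold are placeholder assumptions.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder base model; any sequence-classification checkpoint would do.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Unlabeled pool (placeholder examples).
unlabeled_texts = [
    "the film was surprisingly moving",
    "two hours I will never get back",
]

# Predict labels for the unlabeled pool.
model.eval()
with torch.no_grad():
    batch = tokenizer(unlabeled_texts, padding=True, truncation=True, return_tensors="pt")
    probs = torch.softmax(model(**batch).logits, dim=-1)
    confidence, pseudo_labels = probs.max(dim=-1)

# Keep only high-confidence predictions as pseudo-labels
# (the 0.9 threshold is an assumption, not a recommendation).
threshold = 0.9
pseudo_labeled = [
    (text, label.item())
    for text, label, conf in zip(unlabeled_texts, pseudo_labels, confidence)
    if conf >= threshold
]
# pseudo_labeled can then be mixed with the labeled set for a further fine-tuning pass.
```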
Alternatives and similar repositories for ssfinetuning
Users interested in ssfinetuning are comparing it to the libraries listed below
- ☆46 · Updated 3 years ago
- ☆30 · Updated 3 years ago
- Embedding Recycling for Language Models ☆38 · Updated 2 years ago
- Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data ☆57 · Updated 4 years ago
- Code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models" ☆48 · Updated 3 years ago
- Repo for the ICML 2023 paper "Why Do Nearest Neighbor Language Models Work?" ☆59 · Updated 3 years ago
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 3 years ago
- CSS-LM: Contrastive Semi-supervised Fine-tuning of Pre-trained Language Models ☆12 · Updated 2 years ago
- Implementation of COCO-LM (Correcting and Contrasting Text Sequences for Language Model Pretraining) in PyTorch ☆46 · Updated 4 years ago
- ☆54 · Updated 3 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX ☆81 · Updated 3 years ago
- ☆14 · Updated last year
- ☆21 · Updated 4 years ago
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings ☆75 · Updated last year
- ☆59 · Updated 4 years ago
- ☆12 · Updated 2 years ago
- Task Compass: Scaling Multi-task Pre-training with Task Prefix (EMNLP 2022 Findings; stay tuned, more will be updated) ☆22 · Updated 3 years ago
- Implementation of the paper "Sentence Bottleneck Autoencoders from Transformer Language Models" ☆17 · Updated 3 years ago
- Hugging Face RoBERTa with Flash Attention 2 ☆24 · Updated 4 months ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 3 years ago
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆29 · Updated 3 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset ☆96 · Updated 2 years ago
- An extension of the Transformers library that adds a T5ForSequenceClassification class ☆40 · Updated 2 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆138 · Updated 2 years ago
- Code associated with the paper "Data Augmentation using Pre-trained Transformer Models" ☆51 · Updated 2 years ago
- Language-agnostic BERT Sentence Embedding (LaBSE) PyTorch model ☆21 · Updated 5 years ago
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 3 years ago
- Emotion-Aware Dialogue Response Generation by Multi-Task Learning ☆13 · Updated 4 years ago
- The dataset and code for the ACL 2022 paper "SciNLI: A Corpus for Natural Language Inference on Scientific Text" ☆28 · Updated 2 years ago
- A few-shot learning method based on siamese networks ☆28 · Updated 2 years ago