Ankush7890 / ssfinetuning
A package for fine-tuning pretrained NLP transformers using semi-supervised learning.
☆14 · Updated 3 years ago
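For context, ssfinetuning's focus is semi-supervised fine-tuning of pretrained Hugging Face models. Below is a minimal sketch of one common flavor of that idea, pseudo-labeling, written against the standard `transformers` API. It is not ssfinetuning's own interface; the checkpoint name, confidence threshold, and helper function are illustrative assumptions.

```python
# Sketch of pseudo-label based semi-supervised fine-tuning with Hugging Face
# transformers. This illustrates the general technique, NOT ssfinetuning's API.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased"  # assumed checkpoint, for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def pseudo_label(unlabeled_texts, threshold=0.9):
    """Label unlabeled texts where the model's prediction is confident."""
    model.eval()
    labeled = []
    with torch.no_grad():
        for text in unlabeled_texts:
            inputs = tokenizer(text, return_tensors="pt", truncation=True)
            probs = model(**inputs).logits.softmax(dim=-1)
            conf, label = probs.max(dim=-1)
            if conf.item() >= threshold:  # keep only high-confidence predictions
                labeled.append((text, label.item()))
    return labeled

# Typical loop: fine-tune on the labeled set, pseudo-label the unlabeled pool,
# fold the confident examples back into the training data, and repeat.
```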
Alternatives and similar repositories for ssfinetuning
Users interested in ssfinetuning are comparing it to the libraries listed below.
- ☆46 · Updated 3 years ago
- Code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models" ☆48 · Updated 3 years ago
- ☆21 · Updated 4 years ago
- ☆14 · Updated 11 months ago
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 2 years ago
- Ranking of fine-tuned HF models as base models ☆36 · Updated 4 months ago
- ☆29 · Updated 3 years ago
- Embedding Recycling for Language Models ☆39 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX ☆81 · Updated 3 years ago
- Repo for the ICML 2023 paper "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- [ICLR 2022] Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators ☆25 · Updated 2 years ago
- ☆12 · Updated last year
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings ☆74 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset ☆95 · Updated 2 years ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆21 · Updated 2 months ago
- ☆54 · Updated 2 years ago
- Hugging Face RoBERTa with Flash Attention 2 ☆23 · Updated this week
- CSS-LM: Contrastive Semi-supervised Fine-tuning of Pre-trained Language Models ☆13 · Updated 2 years ago
- ☆13 · Updated 9 months ago
- Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data ☆57 · Updated 4 years ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 2 years ago
- Language-agnostic BERT Sentence Embedding (LaBSE) PyTorch model ☆21 · Updated 5 years ago
- ☆59 · Updated 4 years ago
- Implementation of COCO-LM (Correcting and Contrasting Text Sequences for Language Model Pretraining) in PyTorch ☆46 · Updated 4 years ago
- [COLM 2024] Early Weight Averaging meets High Learning Rates for LLM Pre-training ☆17 · Updated 11 months ago
- Code for "Incorporating Relevance Feedback for Information-Seeking Retrieval using Few-Shot Document Re-Ranking" (https://arxiv.org/abs/2…) ☆14 · Updated 2 years ago
- RATransformers 🐭: Make your transformer (e.g. BERT, RoBERTa, GPT-2, T5) relation-aware! ☆41 · Updated 2 years ago
- A benchmark for robust, multi-evidence, multi-answer question answering ☆16 · Updated 2 years ago
- Code associated with the paper "Data Augmentation Using Pre-trained Transformer Models" ☆52 · Updated 2 years ago
- Ensembling Hugging Face transformers made easy ☆63 · Updated 2 years ago