Ankush7890 / ssfinetuning
A package for fine-tuning pretrained NLP transformers using semi-supervised learning.
☆14 · Updated 3 years ago
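To give a sense of what semi-supervised fine-tuning means here, below is a minimal pseudo-labeling sketch written with plain HuggingFace `transformers`. This is an illustration of the general technique only, not the ssfinetuning API; the model name, the 0.9 confidence threshold, and the toy unlabeled texts are all placeholder assumptions.

```python
# Minimal pseudo-labeling sketch (NOT the ssfinetuning API): predict on
# unlabeled text, keep confident predictions, and fine-tune on them.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

unlabeled_texts = ["an unlabeled example", "another unlabeled example"]

# 1. Predict on unlabeled data and keep only confident predictions.
model.eval()
pseudo_texts, pseudo_labels = [], []
with torch.no_grad():
    for text in unlabeled_texts:
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        probs = model(**inputs).logits.softmax(dim=-1)
        conf, label = probs.max(dim=-1)
        if conf.item() > 0.9:  # confidence threshold (an assumption)
            pseudo_texts.append(text)
            pseudo_labels.append(label.item())

# 2. Fine-tune on the pseudo-labeled data as usual (one step shown);
#    in practice this is mixed with the genuinely labeled data.
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
if pseudo_texts:
    batch = tokenizer(pseudo_texts, return_tensors="pt",
                      padding=True, truncation=True)
    batch["labels"] = torch.tensor(pseudo_labels)
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```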
Alternatives and similar repositories for ssfinetuning
Users interested in ssfinetuning are comparing it to the libraries listed below.
- ☆29 · Updated 3 years ago
- [ICLR 2022] Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators ☆24 · Updated last year
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch ☆46 · Updated 4 years ago
- Embedding Recycling for Language Models ☆38 · Updated last year
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆21 · Updated 4 months ago
- ☆12 · Updated 6 months ago
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 2 years ago
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆30 · Updated 3 years ago
- This repository contains the code for the paper Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models. ☆48 · Updated 3 years ago
- Implementation for the paper "Time-Stamped Language Model: Teaching Language Models to Understand the Flow of Events" ☆11 · Updated 4 years ago
- ☆46 · Updated 3 years ago
- ☆21 · Updated 3 years ago
- ☆14 · Updated 8 months ago
- Combining encoder-based language models ☆11 · Updated 3 years ago
- ☆54 · Updated 2 years ago
- Multilingual entity linking with the BELA model ☆12 · Updated last year
- CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training ☆32 · Updated 2 years ago
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 2 years ago
- Task Compass: Scaling Multi-task Pre-training with Task Prefix (EMNLP 2022 Findings; stay tuned, more will be updated) ☆22 · Updated 2 years ago
- Code and pre-trained models for the paper "Segatron: Segment-aware Transformer for Language Modeling and Understanding" ☆18 · Updated 2 years ago
- ☆18 · Updated 10 months ago
- ☆12 · Updated last year
- Ranking of fine-tuned HF models as base models. ☆35 · Updated last month
- ☆11 · Updated 2 years ago
- ☆12 · Updated last year
- ☆20 · Updated 2 years ago
- A few-shot learning method based on Siamese networks. ☆28 · Updated 2 years ago
- [ACL 2023] Few-shot Reranking for Multi-hop QA via Language Model Prompting ☆27 · Updated 2 years ago
- ☆13 · Updated 2 years ago
- ACL 2022 paper: Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost ☆41 · Updated last year