archinetai / smart-pytorch
PyTorch – SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models.
☆62 · Updated 3 years ago
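The library implements the smoothness-inducing adversarial regularizer from the SMART paper: model outputs should change little under small adversarial perturbations of the input embeddings. As a rough illustration of the idea only, in plain PyTorch rather than the repo's own API (`model_fn`, `smart_reg`, and the hyperparameter values are assumed names for this sketch):

```python
import torch
import torch.nn.functional as F

def sym_kl(p_logits, q_logits):
    # Symmetric KL divergence between two categorical output distributions.
    p_log = F.log_softmax(p_logits, dim=-1)
    q_log = F.log_softmax(q_logits, dim=-1)
    return (F.kl_div(p_log, q_log.exp(), reduction="batchmean")
            + F.kl_div(q_log, p_log.exp(), reduction="batchmean"))

def smart_reg(model_fn, embeds, logits, step_size=1e-3, eps=1e-6):
    # model_fn: embeddings -> logits (hypothetical differentiable head).
    # Start from a small random perturbation of the input embeddings...
    noise = (torch.randn_like(embeds) * eps).requires_grad_()
    adv_loss = sym_kl(model_fn(embeds + noise), logits.detach())
    # ...take one ascent step on the noise toward the locally worst direction...
    grad, = torch.autograd.grad(adv_loss, noise)
    noise = (noise + step_size * grad / (grad.norm() + eps)).detach()
    # ...and penalize the output divergence at that adversarial point.
    return sym_kl(model_fn(embeds + noise), logits)

# Fine-tuning objective: task loss plus the weighted smoothness penalty.
# total_loss = task_loss + lam * smart_reg(head, embeds, logits)
```

The paper pairs this regularizer with Bregman proximal point optimization; the sketch above covers only the adversarial smoothness term.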
Alternatives and similar repositories for smart-pytorch
Users interested in smart-pytorch are comparing it to the libraries listed below.
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆48 · Updated 3 years ago
- [NAACL 2021] This is the code for our paper `Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach`. ☆206 · Updated 3 years ago
- [EMNLP 2021] Improving and Simplifying Pattern Exploiting Training. ☆153 · Updated 3 years ago
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit". ☆66 · Updated 4 years ago
- Implementation of Mixout with PyTorch (a minimal sketch of the idea appears after this list). ☆75 · Updated 3 years ago
- ☆59 · Updated 4 years ago
- State of the art Semantic Sentence Embeddings. ☆99 · Updated 3 years ago
- Source code for the paper "Learning from Noisy Labels for Entity-Centric Information Extraction", EMNLP 2021. ☆55 · Updated 4 years ago
- Code for the paper "True Few-Shot Learning in Language Models" (https://arxiv.org/abs/2105.11447). ☆144 · Updated 4 years ago
- Code for the EMNLP 2021 paper "Is Everything in Order? A Simple Way to Order Sentences". ☆42 · Updated 2 years ago
- Interpreting Language Models with Contrastive Explanations (EMNLP 2022 Best Paper Honorable Mention). ☆62 · Updated 3 years ago
- Google's BigBird (Jax/Flax & PyTorch) @ 🤗Transformers. ☆49 · Updated 2 years ago
- Long-context pretrained encoder-decoder models. ☆96 · Updated 3 years ago
- ☆68 · Updated 7 months ago
- A simple and working implementation of ELECTRA, the fastest way to pretrain language models from scratch, in PyTorch. ☆235 · Updated 2 years ago
- [EMNLP'21] Mirror-BERT: Converting Pretrained Language Models to universal text encoders without labels. ☆78 · Updated 3 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines. ☆137 · Updated 2 years ago
- ☆81 · Updated 4 years ago
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings. ☆75 · Updated last year
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper. ☆52 · Updated 2 years ago
- This repo contains the dataset and description for Ruddit and its variants. ☆36 · Updated 3 years ago
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch. ☆76 · Updated 4 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆138 · Updated 2 years ago
- Research framework for low resource text classification that allows the user to experiment with classification models and active learning… ☆101 · Updated 3 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, though it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- Source code for the ACL 2022 paper "Self-contrastive Decorrelation for Sentence Embeddings". ☆26 · Updated 9 months ago
- PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models. ☆110 · Updated last week
- EMNLP 2021 - Frustratingly Simple Pretraining Alternatives to Masked Language Modeling. ☆34 · Updated 4 years ago
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists". ☆50 · Updated 3 years ago
- ☆117 · Updated 3 years ago
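For the Mixout entry above, the core operation is small enough to show inline. A minimal sketch of the idea, with illustrative names (`w` for a fine-tuned weight tensor, `w_pre` for its frozen pretrained counterpart), not the linked repo's API:

```python
import torch

def mixout(w, w_pre, p=0.7, training=True):
    # With probability p, revert each weight entry to its pretrained value,
    # then rescale so the expected result equals the fine-tuned weight w:
    # E[(m*w_pre + (1-m)*w - p*w_pre) / (1-p)] = w for m ~ Bernoulli(p).
    if not training or p == 0.0:
        return w
    mask = torch.empty_like(w).bernoulli_(p)  # 1 = revert to pretrained
    return (mask * w_pre + (1 - mask) * w - p * w_pre) / (1 - p)
```

Like dropout, the mask is resampled on every forward pass during training; at evaluation time the fine-tuned weights are used unchanged.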