archinetai / smart-pytorch
A PyTorch implementation of SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models.
☆62 · Updated 3 years ago
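The core idea in SMART is a smoothness-inducing adversarial regularizer: the model's predictions on clean input embeddings and on a small adversarial perturbation of those embeddings are pushed to agree (via symmetric KL), and this term is added to the task loss. Below is a minimal, illustrative PyTorch sketch of that regularizer; the names `smart_regularizer`, `sym_kl`, and the `eval_fn` closure, as well as the step sizes and the single-step update, are assumptions for illustration and not this library's API.

```python
import torch
import torch.nn.functional as F

def sym_kl(p_logits, q_logits):
    """Symmetric KL divergence between two categorical distributions given as logits."""
    p_log, q_log = F.log_softmax(p_logits, dim=-1), F.log_softmax(q_logits, dim=-1)
    p, q = p_log.exp(), q_log.exp()
    # F.kl_div(input=log q, target=p) computes KL(p || q); sum both directions.
    return F.kl_div(q_log, p, reduction="batchmean") + F.kl_div(p_log, q, reduction="batchmean")

def smart_regularizer(eval_fn, embeds, noise_eps=1e-5, step_size=1e-3, num_steps=1):
    """SMART-style smoothness-inducing adversarial regularizer (sketch).

    eval_fn: hypothetical closure mapping input embeddings -> classification logits.
    embeds:  clean input embeddings, shape (batch, seq_len, hidden).
    """
    clean_logits = eval_fn(embeds).detach()
    # Start from a small random perturbation of the embeddings.
    noise = torch.randn_like(embeds) * noise_eps
    for _ in range(num_steps):
        noise.requires_grad_()
        adv_div = sym_kl(eval_fn(embeds + noise), clean_logits)
        grad, = torch.autograd.grad(adv_div, noise)
        # Ascend the divergence; normalize the gradient step (crude projection, illustrative only).
        noise = (noise + step_size * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)).detach()
    return sym_kl(eval_fn(embeds + noise), clean_logits)

# Usage sketch (hypothetical names): add the regularizer to the task loss.
# embeds = model.get_input_embeddings()(input_ids)
# loss = F.cross_entropy(eval_fn(embeds), labels) + 0.02 * smart_regularizer(eval_fn, embeds)
```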
Alternatives and similar repositories for smart-pytorch
Users interested in smart-pytorch are comparing it to the libraries listed below.
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models". ☆48 · Updated 3 years ago
- Implementation of Mixout with PyTorch ☆75 · Updated 2 years ago
- Code for the EMNLP 2021 paper "Is Everything in Order? A Simple Way to Order Sentences" ☆42 · Updated 2 years ago
- Source code for the paper "Learning from Noisy Labels for Entity-Centric Information Extraction", EMNLP 2021 ☆55 · Updated 3 years ago
- ☆59 · Updated 4 years ago
- [EMNLP 2021] Improving and Simplifying Pattern Exploiting Training ☆153 · Updated 3 years ago
- State-of-the-art semantic sentence embeddings ☆99 · Updated 3 years ago
- Long-context pretrained encoder-decoder models ☆96 · Updated 2 years ago
- Code for the paper "True Few-Shot Learning in Language Models" (https://arxiv.org/abs/2105.11447) ☆145 · Updated 4 years ago
- Code for the NAACL 2022 long paper "DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings" ☆296 · Updated 2 years ago
- ☆68 · Updated 5 months ago
- [NAACL 2021] This is the code for our paper `Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self… ☆205 · Updated 3 years ago
- Code associated with the paper "Data Augmentation using Pre-trained Transformer Models" ☆52 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings. ☆74 · Updated last year
- A simple recipe for training and inference with a Transformer architecture for multi-task learning on custom datasets. You can find two approa… ☆97 · Updated 3 years ago
- A simple and working implementation of ELECTRA, the fastest way to pretrain language models from scratch, in PyTorch ☆235 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- This is the official implementation of the NeurIPS 2021 paper "One Question Answering Model for Many Languages with Cross-lingual Dense Passage Ret… ☆71 · Updated 3 years ago
- Code for the EMNLP 2021 paper "Active Learning by Acquiring Contrastive Examples" & the ACL 2022 paper "On the Importance of Effectively … ☆128 · Updated 3 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆137 · Updated 2 years ago
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit". ☆66 · Updated 4 years ago
- [EMNLP 2021] Text AutoAugment: Learning Compositional Augmentation Policy for Text Classification ☆131 · Updated 2 years ago
- Implementation of MARGE, Pre-training via Paraphrasing, in PyTorch ☆76 · Updated 4 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen: Improving Text Generation with Large Ranking Models" (https://arx… ☆138 · Updated 2 years ago
- An official repository for the MIA 2022 (NAACL 2022 Workshop) Shared Task on Cross-lingual Open-Retrieval Question Answering ☆31 · Updated 3 years ago
- Multilingual abstractive summarization dataset extracted from WikiHow. ☆95 · Updated 7 months ago
- The official implementation of "Distilling Relation Embeddings from Pre-trained Language Models" (EMNLP 2021 main conference), a high-qual… ☆47 · Updated 10 months ago
- The official code for PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization ☆157 · Updated 2 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆117 · Updated 2 years ago