facebookresearch / perfect
PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models
☆109 · Updated 3 years ago
Alternatives and similar repositories for perfect
Users interested in perfect are comparing it to the libraries listed below.
- [NeurIPS'22 Spotlight] Data and code for our paper CoNT: Contrastive Neural Text Generation ☆152 · Updated 2 years ago
- ☆117 · Updated 2 years ago
- [ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning” ☆100 · Updated 2 years ago
- ☆35 · Updated last year
- Source code for ACL 2022 paper "Prompt-based Data Augmentation for Low-Resource NLU Tasks" ☆69 · Updated 2 years ago
- Official code for "PPT: Pre-trained Prompt Tuning for Few-shot Learning" (ACL 2022) ☆108 · Updated 2 years ago
- reStructured Pre-training ☆98 · Updated 2 years ago
- ☆90 · Updated last year
- [NeurIPS 2022] Generating Training Data with Language Models: Towards Zero-Shot Language Understanding ☆65 · Updated 2 years ago
- This repository contains the code for extracting the test samples we used in our paper: "A Multitask, Multilingual, Multimodal Evaluatio…" ☆77 · Updated last year
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following ☆79 · Updated 8 months ago
- ☆26 · Updated 2 years ago
- Code for Editing Factual Knowledge in Language Models ☆137 · Updated 3 years ago
- This repository is the official implementation of our paper MVP: Multi-task Supervised Pre-training for Natural Language Generation ☆72 · Updated 2 years ago
- TBC ☆27 · Updated 2 years ago
- 🩺 A collection of ChatGPT evaluation reports on various benchmarks ☆49 · Updated 2 years ago
- ☆78 · Updated 2 years ago
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions ☆180 · Updated 2 years ago
- An original implementation of "Noisy Channel Language Model Prompting for Few-Shot Text Classification" ☆131 · Updated 3 years ago
- Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" (a minimal sketch of the soft-prompt idea appears after this list) ☆167 · Updated 3 years ago
- EMNLP 2021: Simple Entity-centric Questions Challenge Dense Retrievers, https://arxiv.org/abs/2109.08535 ☆146 · Updated 3 years ago
- Continue pretraining T5 on a custom dataset, starting from available pretrained model checkpoints ☆38 · Updated 4 years ago
- Code for the paper 'Data-Efficient FineTuning' ☆29 · Updated last year
- Collection of scripts to pretrain T5 on unsupervised text using PyTorch Lightning; CORD-19 pretraining provided as an example ☆32 · Updated 4 years ago
- ☆66 · Updated 3 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen - Improving Text Generation with Large Ranking Models" (https://arx… ☆136 · Updated last year
- On Transferability of Prompt Tuning for Natural Language Processing ☆99 · Updated last year
- [NLPCC 2022] Kformer: Knowledge Injection in Transformer Feed-Forward Layers ☆37 · Updated 2 years ago
- Code for ACL 2023 paper: Pre-Training to Learn in Context ☆108 · Updated 9 months ago
- ☆38 · Updated last year
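
Several of the entries above (PPT, "The Power of Scale for Parameter-Efficient Prompt Tuning", the prompt-transferability work) center on soft prompt tuning. For orientation only, here is a minimal sketch of the core idea: trainable prompt embeddings prepended to a frozen pretrained model's input. This is not the implementation from any listed repository; the model name, prompt length, and label count are illustrative assumptions.

```python
# Minimal soft prompt tuning sketch (hypothetical, not from any repo above):
# learnable prompt vectors are prepended to the input embeddings of a frozen
# backbone; only the prompt parameters receive gradients.
import torch
import torch.nn as nn
from transformers import AutoModelForSequenceClassification, AutoTokenizer

class SoftPromptModel(nn.Module):
    def __init__(self, model_name="bert-base-uncased", prompt_len=20, num_labels=2):
        super().__init__()
        self.model = AutoModelForSequenceClassification.from_pretrained(
            model_name, num_labels=num_labels
        )
        for p in self.model.parameters():  # freeze the entire backbone
            p.requires_grad = False
        hidden = self.model.config.hidden_size
        # the only trainable parameters: a (prompt_len x hidden) soft prompt
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

    def forward(self, input_ids, attention_mask, labels=None):
        embeds = self.model.get_input_embeddings()(input_ids)
        batch = embeds.size(0)
        # prepend the shared prompt to every example in the batch
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        embeds = torch.cat([prompt, embeds], dim=1)
        # extend the attention mask to cover the prompt positions
        prompt_mask = torch.ones(
            batch, self.prompt.size(0),
            dtype=attention_mask.dtype, device=attention_mask.device,
        )
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.model(
            inputs_embeds=embeds, attention_mask=attention_mask, labels=labels
        )

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SoftPromptModel()
batch = tok(["a tiny example"], return_tensors="pt")
out = model(batch["input_ids"], batch["attention_mask"], labels=torch.tensor([1]))
out.loss.backward()  # gradients flow only into model.prompt
```

In a training loop, the optimizer would be given just `[model.prompt]`, which is what makes the approach parameter-efficient: the backbone checkpoint is shared across tasks and only the small prompt tensor is stored per task.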