exelents / soft-prompt-tuning
Prompt tuning for GPT-J
☆68 · Updated 2 years ago
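Soft prompt tuning, as popularized by "The Power of Scale for Parameter-Efficient Prompt Tuning", trains only a small matrix of prompt embeddings prepended to the input while the language model stays frozen. A minimal PyTorch sketch of the idea (class names, sizes, and the random-vocabulary initialization are illustrative assumptions, not this repository's API):

```python
import torch
import torch.nn as nn

class SoftPromptEmbedding(nn.Module):
    """Prepends n_tokens learnable vectors to the token embeddings.
    Only these vectors are trained; the base model stays frozen."""
    def __init__(self, wte: nn.Embedding, n_tokens: int = 20):
        super().__init__()
        self.wte = wte
        self.n_tokens = n_tokens
        # Initialize from randomly sampled vocabulary embeddings,
        # a common heuristic in the prompt-tuning literature.
        idx = torch.randint(wte.num_embeddings, (n_tokens,))
        self.soft_prompt = nn.Parameter(wte.weight[idx].detach().clone())

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        tok_emb = self.wte(input_ids)                       # (B, T, D)
        prompt = self.soft_prompt.unsqueeze(0).expand(
            input_ids.size(0), -1, -1)                      # (B, n_tokens, D)
        return torch.cat([prompt, tok_emb], dim=1)          # (B, n_tokens+T, D)

# Usage sketch: swap in for the model's input embedding layer,
# freeze everything except the soft prompt, then train as usual.
vocab, dim = 100, 16
wte = nn.Embedding(vocab, dim)
wte.weight.requires_grad_(False)        # base embeddings frozen
layer = SoftPromptEmbedding(wte, n_tokens=5)
out = layer(torch.randint(vocab, (2, 7)))
print(out.shape)                        # torch.Size([2, 12, 16])
```

In a real setup the frozen `wte` would come from a pretrained model (e.g. GPT-J's input embeddings) and the optimizer would be given only `layer.soft_prompt`.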
Alternatives and similar repositories for soft-prompt-tuning
Users who are interested in soft-prompt-tuning are comparing it to the repositories listed below.
- Code for the paper InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning ☆100 · Updated 2 years ago
- Code for Editing Factual Knowledge in Language Models ☆142 · Updated 3 years ago
- First explanation metric (diagnostic report) for text generation evaluation ☆62 · Updated 10 months ago
- The code implementation of the EMNLP 2022 paper: DisCup: Discriminator Cooperative Unlikelihood Prompt-tuning for Controllable Text Gene… ☆27 · Updated 2 years ago
- Token-level Reference-free Hallucination Detection ☆97 · Updated 2 years ago
- ☆90 · Updated last year
- Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" ☆167 · Updated 4 years ago
- ☆88 · Updated 2 years ago
- [EMNLP 2022] Training Language Models with Memory Augmentation https://arxiv.org/abs/2205.12674 ☆195 · Updated 2 years ago
- On Transferability of Prompt Tuning for Natural Language Processing ☆100 · Updated last year
- [NeurIPS'22 Spotlight] Data and code for our paper CoNT: Contrastive Neural Text Generation ☆153 · Updated 2 years ago
- Repository for EMNLP 2022 Paper: Towards a Unified Multi-Dimensional Evaluator for Text Generation ☆214 · Updated last year
- Technical Report: Is ChatGPT a Good NLG Evaluator? A Preliminary Study ☆43 · Updated 2 years ago
- ☆46 · Updated last year
- ☆83 · Updated 2 years ago
- A dataset for training/evaluating Question Answering Retrieval models on ChatGPT responses, with the possibility of training/evaluating on… ☆141 · Updated 2 years ago
- ☆89 · Updated 3 years ago
- Collection of scripts to pretrain T5 on unsupervised text, using PyTorch Lightning. CORD-19 pretraining provided as an example. ☆32 · Updated 4 years ago
- This repository contains the code for extracting the test samples we used in our paper: "A Multitask, Multilingual, Multimodal Evaluatio… ☆81 · Updated 2 years ago
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions. ☆183 · Updated 3 years ago
- The official code of TACL 2021, "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies". ☆82 · Updated 3 years ago
- Code and models for the paper "Questions Are All You Need to Train a Dense Passage Retriever (TACL 2023)" ☆62 · Updated 3 years ago
- ☆83 · Updated last week
- Code base of In-Context Learning for Dialogue State tracking ☆45 · Updated 2 years ago
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer and Hannaneh Hajishirzi ☆272 · Updated 2 years ago
- Code, datasets, and checkpoints for the paper "Improving Passage Retrieval with Zero-Shot Question Generation (EMNLP 2022)" ☆100 · Updated 3 years ago
- Source code for ACL 2022 Paper "Prompt-based Data Augmentation for Low-Resource NLU Tasks" ☆71 · Updated 2 years ago
- Detect hallucinated tokens for conditional sequence generation. ☆64 · Updated 3 years ago
- Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System (ACL 2022) ☆161 · Updated 2 years ago
- Prompt tuning toolkit for GPT-2 and GPT-Neo ☆89 · Updated 4 years ago