Original Implementation of Prompt Tuning from Lester et al., 2021
☆697 · Mar 6, 2025 · Updated last year
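As background for the listings below: prompt tuning (Lester et al., 2021) freezes all model weights and learns only a small matrix of "soft prompt" embeddings that is prepended to the input embeddings. The numpy sketch below illustrates the shape bookkeeping only; all dimensions and names are illustrative assumptions, not taken from any repository on this page.

```python
import numpy as np

# Minimal shape-level sketch of soft prompt tuning. All sizes below are
# hypothetical stand-ins; a real setup would use the frozen LM's own
# embedding layer and train prompt_embeds by gradient descent.
rng = np.random.default_rng(0)

embed_dim = 16    # model embedding size (assumed)
prompt_len = 5    # number of learnable soft prompt tokens
seq_len = 8       # number of real input tokens

# Stand-in for the frozen model's embedding lookup of the input tokens
# (these parameters are NOT trained in prompt tuning).
input_embeds = rng.normal(size=(seq_len, embed_dim))

# The only trainable parameters: one embedding vector per prompt position.
prompt_embeds = rng.normal(size=(prompt_len, embed_dim))

# Prepend the soft prompt to the input before it enters the frozen model.
model_input = np.concatenate([prompt_embeds, input_embeds], axis=0)
print(model_input.shape)  # (13, 16)
```

The trainable parameter count is just `prompt_len * embed_dim`, which is why the method scales well compared to full fine-tuning.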
Alternatives and similar repositories for prompt-tuning
Users interested in prompt-tuning are comparing it to the libraries listed below.
- Prefix-Tuning: Optimizing Continuous Prompts for Generation · ☆962 · Apr 26, 2024 · Updated last year
- ☆350 · Aug 8, 2021 · Updated 4 years ago
- Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" · ☆167 · Sep 8, 2021 · Updated 4 years ago
- ☆26 · Aug 14, 2022 · Updated 3 years ago
- Official Code for "PPT: Pre-trained Prompt Tuning for Few-shot Learning" (ACL 2022) · ☆110 · Aug 10, 2022 · Updated 3 years ago
- On Transferability of Prompt Tuning for Natural Language Processing · ☆100 · May 3, 2024 · Updated last year
- The official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) · ☆104 · Dec 1, 2022 · Updated 3 years ago
- A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too" · ☆938 · Oct 6, 2022 · Updated 3 years ago
- An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks · ☆2,076 · Nov 16, 2023 · Updated 2 years ago
- Must-read papers on prompt-based tuning for pre-trained language models · ☆4,298 · Jul 17, 2023 · Updated 2 years ago
- Implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" · ☆58 · Jun 27, 2022 · Updated 3 years ago
- An Open-Source Framework for Prompt-Learning · ☆4,847 · Jul 16, 2024 · Updated last year
- ☆2,956 · Mar 9, 2026 · Updated 2 weeks ago
- Prompt tuning toolkit for GPT-2 and GPT-Neo · ☆90 · Sep 27, 2021 · Updated 4 years ago
- An original implementation of "Noisy Channel Language Model Prompting for Few-Shot Text Classification" · ☆131 · Apr 23, 2022 · Updated 3 years ago
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" · ☆456 · Sep 6, 2023 · Updated 2 years ago
- ☆14 · May 3, 2022 · Updated 3 years ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) · ☆464 · Nov 5, 2022 · Updated 3 years ago
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) · ☆543 · Mar 24, 2022 · Updated 4 years ago
- Toolkit for creating, sharing, and using natural language prompts · ☆3,007 · Oct 23, 2023 · Updated 2 years ago
- ☆90 · Mar 29, 2024 · Updated 2 years ago
- A Unified Library for Parameter-Efficient and Modular Transfer Learning · ☆2,804 · Mar 21, 2026 · Updated last week
- [ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models (https://arxiv.org/abs/2012.15723) · ☆729 · Aug 29, 2022 · Updated 3 years ago
- ☆367 · Apr 12, 2024 · Updated last year
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models · ☆3,220 · Jul 19, 2024 · Updated last year
- AutoPrompt: Automatic Prompt Construction for Masked Language Models · ☆641 · Aug 24, 2024 · Updated last year
- ☆400 · Oct 12, 2021 · Updated 4 years ago
- ☆1,559 · Updated this week
- Prompt tuning for GPT-J · ☆67 · Mar 10, 2023 · Updated 3 years ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning · ☆20,841 · Mar 18, 2026 · Updated last week
- This repository contains the code for "Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference" · ☆1,626 · Jun 12, 2023 · Updated 2 years ago
- [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall (https://arxiv.org/abs/2104.05240) · ☆168 · Oct 7, 2022 · Updated 3 years ago
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi · ☆274 · Apr 15, 2023 · Updated 2 years ago
- Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" · ☆6,494 · Jan 14, 2026 · Updated 2 months ago
- ☆99 · Jul 25, 2023 · Updated 2 years ago
- NL-Augmenter 🦎 → 🐍 A Collaborative Repository of Natural Language Transformations · ☆786 · May 19, 2024 · Updated last year
- ☆158 · Aug 24, 2021 · Updated 4 years ago
- A modular RL library to fine-tune language models to human preferences · ☆2,383 · Mar 1, 2024 · Updated 2 years ago
- ☆184 · May 26, 2023 · Updated 2 years ago