On Transferability of Prompt Tuning for Natural Language Processing
☆100 · May 3, 2024 · Updated last year
Alternatives and similar repositories for Prompt-Transferability
Users that are interested in Prompt-Transferability are comparing it to the libraries listed below.
- Code for "Inducer-tuning: Connecting Prefix-tuning and Adapter-tuning" (EMNLP 2022) and "Empowering Parameter-Efficient Transfer Learning…☆11 · Feb 6, 2023 · Updated 3 years ago
- ☆11 · Nov 13, 2024 · Updated last year
- ☆26 · Aug 14, 2022 · Updated 3 years ago
- Task Compass: Scaling Multi-task Pre-training with Task Prefix (EMNLP 2022: Findings) (stay tuned & more will be updated)☆22 · Oct 17, 2022 · Updated 3 years ago
- ☆16 · Aug 14, 2022 · Updated 3 years ago
- ☆21 · Dec 5, 2022 · Updated 3 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022)☆104 · Dec 1, 2022 · Updated 3 years ago
- Original implementation of Prompt Tuning from Lester et al., 2021☆699 · Mar 6, 2025 · Updated last year
- CSS-LM: Contrastive Semi-supervised Fine-tuning of Pre-trained Language Models☆12 · Jul 1, 2023 · Updated 2 years ago
- [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240 ☆168 · Oct 7, 2022 · Updated 3 years ago
- ☆14 · Apr 27, 2022 · Updated 4 years ago
- Neuron Activation☆26 · Nov 21, 2024 · Updated last year
- Code for the paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835)☆113 · Apr 28, 2022 · Updated 4 years ago
- ☆26 · Nov 23, 2023 · Updated 2 years ago
- Official code for the paper "PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains"☆50 · Apr 17, 2022 · Updated 4 years ago
- ICML 2022: Black-Box Tuning for Language-Model-as-a-Service & EMNLP 2022: BBTv2: Towards a Gradient-Free Future with Large Language Model…☆271 · Nov 8, 2022 · Updated 3 years ago
- ☆131 · Aug 18, 2022 · Updated 3 years ago
- ☆158 · Aug 24, 2021 · Updated 4 years ago
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?"☆33 · Jan 9, 2025 · Updated last year
- Residual Prompt Tuning: a method for faster and better prompt tuning☆56 · May 10, 2023 · Updated 2 years ago
- An open-source framework for prompt-learning☆4,857 · Jul 16, 2024 · Updated last year
- Official code for "PPT: Pre-trained Prompt Tuning for Few-shot Learning" (ACL 2022)☆110 · Aug 10, 2022 · Updated 3 years ago
- An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi☆274 · Apr 15, 2023 · Updated 3 years ago
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning☆34 · Aug 9, 2023 · Updated 2 years ago
- ☆15 · Apr 19, 2021 · Updated 5 years ago
- This repository accompanies the paper "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?"☆85 · May 10, 2022 · Updated 3 years ago
- Implementation of the report "On the Domain Robustness of Prefix and Prompt Tuning"☆20 · Mar 10, 2022 · Updated 4 years ago
- CokeBERT: Contextual Knowledge Selection and Embedding towards Enhanced Pre-Trained Language Models☆31 · Jul 2, 2023 · Updated 2 years ago
- ☆11 · Jun 23, 2022 · Updated 3 years ago
- 😜 Contrastive Learning of Sentence Embedding using LoRA (EECS 487 final project)