arazd / ProgressivePrompts
Progressive Prompts: Continual Learning for Language Models
☆91 · Updated last year
Alternatives and similar repositories for ProgressivePrompts:
Users interested in ProgressivePrompts are comparing it to the repositories listed below.
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆100 · Updated 2 years ago
- [ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”. ☆97 · Updated last year
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆64 · Updated last year
- ☆127 · Updated 2 years ago
- ☆52 · Updated last year
- ☆25 · Updated last year
- ☆152 · Updated 3 years ago
- The code for lifelong few-shot language learning ☆55 · Updated 2 years ago
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM) ☆86 · Updated last year
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆107 · Updated last year
- ☆60 · Updated 2 years ago
- Retrieval as Attention ☆83 · Updated 2 years ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆71 · Updated last month
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆66 · Updated 2 years ago
- ☆75 · Updated last year
- [EMNLP 2022] Code for our paper “ZeroGen: Efficient Zero-shot Learning via Dataset Generation”. ☆46 · Updated 2 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆97 · Updated last year
- ☆85 · Updated 2 years ago
- Code for the ACL 2022 paper "Continual Sequence Generation with Adaptive Compositional Modules" ☆38 · Updated 2 years ago
- Residual Prompt Tuning: a method for faster and better prompt tuning. ☆52 · Updated last year
- [ICLR 2022] Towards Continual Knowledge Learning of Language Models ☆92 · Updated 2 years ago
- ☆165 · Updated last year
- Adding new tasks to T0 without catastrophic forgetting ☆32 · Updated 2 years ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆38 · Updated last year
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- ☆42 · Updated last year
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆138 · Updated 2 years ago
- Let's Sample Step by Step: Adaptive-Consistency for Efficient Reasoning with LLMs ☆34 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆26 · Updated 10 months ago
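Since the list above gives titles only, here is a minimal sketch of the core idea behind the main repository, Progressive Prompts: the base language model stays frozen, each new task learns a fresh soft prompt, and that prompt is concatenated with all previously learned (now frozen) prompts. The class and method names below (`ProgressivePrompts`, `add_task`) are hypothetical for illustration, not the repository's actual API, and the prompt ordering and initialization are illustrative.

```python
import torch
import torch.nn as nn

class ProgressivePrompts(nn.Module):
    """Sketch of progressive prompt tuning: one soft prompt per task,
    earlier prompts frozen, all prompts prepended to the input embeddings."""

    def __init__(self, embed_dim: int, prompt_len: int = 10):
        super().__init__()
        self.embed_dim = embed_dim
        self.prompt_len = prompt_len
        self.prompts = nn.ParameterList()  # one soft prompt per task seen so far

    def add_task(self) -> None:
        """Start a new task: freeze all existing prompts, append a trainable one."""
        for p in self.prompts:
            p.requires_grad_(False)
        new_prompt = nn.Parameter(torch.randn(self.prompt_len, self.embed_dim) * 0.02)
        self.prompts.append(new_prompt)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        """Prepend [P_k; ...; P_1] (newest first) to a batch of token embeddings."""
        batch = input_embeds.size(0)
        stacked = torch.cat(list(reversed(self.prompts)), dim=0)  # (k * prompt_len, dim)
        stacked = stacked.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([stacked, input_embeds], dim=1)

# Usage sketch: two sequential tasks.
pp = ProgressivePrompts(embed_dim=768)
pp.add_task()                    # task 1: pp.prompts[0] is trainable
x = torch.randn(4, 16, 768)      # (batch, seq, dim) token embeddings
out = pp(x)                      # (4, 16 + 10, 768)
pp.add_task()                    # task 2: prompt 0 frozen, prompt 1 trainable
out = pp(x)                      # (4, 16 + 20, 768)
```

Only the newest prompt receives gradients during each task, which is what lets the method add tasks without overwriting earlier ones; several entries in the list above (e.g. Residual Prompt Tuning, the adaptive compositional modules, and the T0 task-addition repo) explore neighboring points in this same parameter-efficient continual-learning space.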