UIC-Liu-Lab / CPT
[EMNLP 2022] Continual Training of Language Models for Few-Shot Learning
☆45 · Updated 2 years ago
Alternatives and similar repositories for CPT:
Users interested in CPT are comparing it to the libraries listed below.
- [EMNLP 2022] Adapting a Language Model While Preserving its General Knowledge ☆21 · Updated 2 years ago
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 2 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆100 · Updated 2 years ago
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆45 · Updated 2 years ago
- [EMNLP 2022] Code for our paper "ZeroGen: Efficient Zero-shot Learning via Dataset Generation" ☆47 · Updated 3 years ago
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆67 · Updated last year
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆75 · Updated last year
- ☆52 · Updated last year
- ☆32 · Updated 3 years ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆71 · Updated 2 years ago
- Progressive Prompts: Continual Learning for Language Models ☆92 · Updated last year
- The code for lifelong few-shot language learning ☆55 · Updated 3 years ago
- ☆85 · Updated 2 years ago
- Repo for the ACL 2023 Findings paper "Emergent Modularity in Pre-trained Transformers" ☆23 · Updated last year
- ☆48 · Updated last year
- Code for the ACL 2022 paper "Continual Sequence Generation with Adaptive Compositional Modules" ☆38 · Updated 3 years ago
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022) ☆44 · Updated 2 years ago
- ☆129 · Updated 8 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆36 · Updated last year
- Code for "Inducer-tuning: Connecting Prefix-tuning and Adapter-tuning" (EMNLP 2022) and "Empowering Parameter-Efficient Transfer Learning… ☆12 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆32 · Updated 11 months ago
- [Findings of EMNLP 2022] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models ☆19 · Updated 2 years ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆39 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆49 · Updated 2 years ago
- Retrieval as Attention ☆83 · Updated 2 years ago
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- Implementation for Variational Information Bottleneck for Effective Low-resource Fine-tuning (ICLR 2021) ☆39 · Updated 3 years ago
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- Analyzing LLM Alignment via Token Distribution Shift ☆16 · Updated last year