UIC-Liu-Lab / DGA
[EMNLP 2022] Adapting a Language Model While Preserving its General Knowledge
☆21 · Updated 2 years ago
Alternatives and similar repositories for DGA
Users interested in DGA are comparing it to the repositories listed below.
- [EMNLP 2022] Continual Training of Language Models for Few-Shot Learning ☆45 · Updated 2 years ago
- ☆41 · Updated last year
- The official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆102 · Updated 2 years ago
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 2 years ago
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆22 · Updated 10 months ago
- [NAACL 2022] "Learning to Win Lottery Tickets in BERT Transfer via Task-agnostic Mask Training", Yuanxin Liu, Fandong Meng, Zheng Lin, Pe… ☆15 · Updated 2 years ago
- The code for lifelong few-shot language learning ☆55 · Updated 3 years ago
- TBC ☆27 · Updated 2 years ago
- [NeurIPS 2023] Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective ☆33 · Updated last year
- ☆54 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- ☆51 · Updated last year
- Dataset for Unified Editing, EMNLP 2023. This is a model editing dataset where edits are natural language phrases. ☆23 · Updated 9 months ago
- Analyzing LLM Alignment via Token distribution shift ☆16 · Updated last year
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- ☆36 · Updated last year
- An Empirical Study on Contrastive Search and Contrastive Decoding for Open-ended Text Generation ☆27 · Updated last year
- Collections of IR Research ☆35 · Updated last month
- [EMNLP 2022] Code for our paper "ZeroGen: Efficient Zero-shot Learning via Dataset Generation". ☆16 · Updated 3 years ago
- "FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning" (ACL 2023) ☆14 · Updated last year
- Code for the ICLR'22 paper "On Robust Prefix-Tuning for Text Classification" ☆27 · Updated 3 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆98 · Updated 2 years ago
- ☆28 · Updated last year
- Codebase for Hyperdecoders https://arxiv.org/abs/2203.08304 ☆11 · Updated 2 years ago
- [EMNLP 2022] Code for our paper "ZeroGen: Efficient Zero-shot Learning via Dataset Generation". ☆48 · Updated 3 years ago
- Code for the paper "Data-Efficient FineTuning" ☆29 · Updated 2 years ago
- Methods and evaluation for aligning language models temporally ☆29 · Updated last year
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning". ☆65 · Updated 2 years ago
- ICLR 2022 ☆17 · Updated 3 years ago