TaoYang225 / AD-DROP
Source code of NeurIPS 2022 accepted paper "AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning"
☆23 · Updated 3 years ago
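The repository itself is not reproduced on this page, but the idea its title describes, dropping attention positions by attribution rather than uniformly at random, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function name `ad_drop_mask` and the plain top-k selection rule are hypothetical simplifications, not the paper's actual algorithm.

```python
import numpy as np

def ad_drop_mask(attributions, drop_ratio=0.3):
    """Illustrative sketch (NOT the paper's algorithm): zero out the
    positions with the highest attribution scores, so the model cannot
    over-rely on the few positions it attributes its prediction to.
    """
    a = np.asarray(attributions, dtype=float)
    k = int(len(a) * drop_ratio)          # number of positions to drop
    mask = np.ones_like(a)
    if k > 0:
        top = np.argsort(a)[-k:]          # indices of the k largest scores
        mask[top] = 0.0                   # drop the most-attributed positions
    return mask

# e.g. with scores [0.1, 0.9, 0.5, 0.2] and drop_ratio=0.5,
# the two highest-attribution positions (indices 1 and 2) are masked.
```

In the paper's setting the mask would multiply attention logits during fine-tuning; here it is just a standalone array for clarity.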
Alternatives and similar repositories for AD-DROP
Users interested in AD-DROP are comparing it to the repositories listed below.
- Code for the AAAI 2022 paper "Well-classified Examples are Underestimated in Classification with Deep Neural Networks" ☆54 · Updated 3 years ago
- Code for the ACL 2022 paper "BERT Learns to Teach: Knowledge Distillation with Meta Learning" ☆86 · Updated 3 years ago
- ☆14 · Updated 3 years ago
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- Code for the ICLR 2022 paper "On Robust Prefix-Tuning for Text Classification" ☆27 · Updated 3 years ago
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" ☆56 · Updated 2 years ago
- Code for the paper "Rehearsal-free Continual Language Learning via Efficient Parameter Isolation" ☆12 · Updated 2 years ago
- Source code for the EMNLP 2021 paper "Contrastive Out-of-Distribution Detection for Pretrained Transformers" ☆40 · Updated 3 years ago
- Code for the EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆41 · Updated 3 years ago
- Implementation of "Variational Information Bottleneck for Effective Low-resource Fine-tuning" (ICLR 2021) ☆41 · Updated 4 years ago
- Official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆102 · Updated 2 years ago
- (Unofficial) implementation of the ICLR 2021 paper "Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multil…" ☆14 · Updated 3 years ago
- Code for "Certified Robustness to Text Adversarial Attacks by Randomized [MASK]" ☆17 · Updated last year
- Source code for the AAAI 2022 paper "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" ☆25 · Updated 3 years ago
- Official code for the paper "PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains" ☆51 · Updated 3 years ago
- [ICLR 2022] Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners ☆130 · Updated 2 years ago
- ☆13 · Updated 2 months ago
- ☆33 · Updated 4 years ago
- ☆32 · Updated 3 years ago
- Source code for the EMNLP 2021 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning" ☆61 · Updated 3 years ago
- Code for "Mitigating Catastrophic Forgetting in Large Language Models with Self-Synthesized Rehearsal" (ACL 2024) ☆16 · Updated last year
- ICLR 2022 ☆18 · Updated 3 years ago
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" ☆36 · Updated 9 months ago
- [ICLR 2025] Code for the paper "Spurious Forgetting in Continual Learning of Language Models" ☆55 · Updated 5 months ago
- Code for the ACL 2022 paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" ☆63 · Updated 3 years ago
- ☆25 · Updated 4 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆126 · Updated last year
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- Code for the ACL 2022 paper "Knowledge Neurons in Pretrained Transformers" ☆172 · Updated last year
- Self-adaptive in-context learning ☆45 · Updated 2 years ago