TaoYang225 / AD-DROP
Source code of NeurIPS 2022 accepted paper "AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning"
☆23 · Updated 3 years ago
Alternatives and similar repositories for AD-DROP
Users that are interested in AD-DROP are comparing it to the libraries listed below
- Code for the ICLR'22 paper: On Robust Prefix-Tuning for Text Classification ☆27 · Updated 3 years ago
- Code for ACL 2022 paper "BERT Learns to Teach: Knowledge Distillation with Meta Learning". ☆87 · Updated 3 years ago
- Code for EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆41 · Updated 3 years ago
- Source code for paper "Contrastive Out-of-Distribution Detection for Pretrained Transformers", EMNLP 2021 ☆40 · Updated 4 years ago
- The code for lifelong few-shot language learning ☆55 · Updated 3 years ago
- Code for the AAAI 2022 publication "Well-classified Examples are Underestimated in Classification with Deep Neural Networks" ☆54 · Updated 3 years ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated 2 years ago
- Implementation for Variational Information Bottleneck for Effective Low-resource Fine-tuning, ICLR 2021 ☆42 · Updated 4 years ago
- Official code for the paper "PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains". ☆51 · Updated 3 years ago
- Crawl & visualize ICLR papers and reviews. ☆18 · Updated 3 years ago
- For Certified Robustness to Text Adversarial Attacks by Randomized [MASK] ☆17 · Updated last year
- Source code for our EMNLP'21 paper 《Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning》 ☆62 · Updated 4 years ago
- ☆157 · Updated 4 years ago
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated 2 years ago
- Codes for the paper: "Continual Learning for Text Classification with Information Disentanglement Based Regularization" ☆44 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆105 · Updated 3 years ago
- Implementation of the research paper Consistent Representation Learning for Continual Relation Extraction (Findings of ACL 2022) ☆26 · Updated 3 years ago
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" ☆57 · Updated 2 years ago
- Source code for our AAAI'22 paper 《From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression》 ☆25 · Updated 4 years ago
- ☆20 · Updated 5 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆128 · Updated last year
- [ICLR 2025] Code & data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated last year
- Code for ACL 2024 accepted paper titled "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language … ☆38 · Updated 11 months ago
- [ICLR 2022] Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners ☆130 · Updated 3 years ago
- Self-adaptive in-context learning ☆45 · Updated 2 years ago
- ☆21 · Updated 4 years ago
- ☆32 · Updated 3 years ago
- ICLR 2022 ☆18 · Updated 3 years ago
- Methods and evaluation for aligning language models temporally ☆30 · Updated last year