TaoYang225 / AD-DROP
Source code of NeurIPS 2022 accepted paper "AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning"
☆24 · Updated 2 years ago
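For orientation, below is a minimal, illustrative sketch of the idea the paper's title refers to: score self-attention positions with a gradient-based attribution and randomly drop the most-attributed positions as a regularizer during fine-tuning. This is not the repository's code; the function name, the `candidate_rate`/`drop_rate` parameters, and the attention-times-gradient attribution are assumptions made here for illustration.

```python
import torch

def attribution_dropout_mask(attn, attn_grad, candidate_rate=0.3, drop_rate=0.5):
    """Hypothetical sketch of an attribution-driven dropout mask.

    attn, attn_grad: (batch, heads, seq, seq) attention probabilities and their
    gradients w.r.t. the task loss. Returns a {0,1} mask of the same shape.
    """
    # Gradient-based attribution score per attention position (assumption:
    # attention probability times its gradient, clipped at zero).
    attribution = (attn * attn_grad).clamp(min=0)

    seq_len = attn.size(-1)
    k = max(1, int(candidate_rate * seq_len))

    # Mark the k most-attributed positions in each attention row as drop candidates.
    topk_idx = attribution.topk(k, dim=-1).indices
    candidates = torch.zeros_like(attn, dtype=torch.bool)
    candidates.scatter_(-1, topk_idx, True)

    # Drop each candidate independently with probability drop_rate; keep the rest.
    drop = candidates & (torch.rand_like(attn) < drop_rate)
    return (~drop).float()
```

In such a setup the returned mask would typically be multiplied into the attention probabilities (or turned into an additive -inf mask on the logits) during fine-tuning forward passes; see the repository and paper for the actual procedure and hyperparameters.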
Related projects
Alternatives and complementary repositories for AD-DROP
- Implementation of the research paper Consistent Representation Learning for Continual Relation Extraction (Findings of ACL 2022) ☆25 · Updated 2 years ago
- Code for the ICLR'22 paper: On Robust Prefix-Tuning for Text Classification ☆27 · Updated 2 years ago
- Code for CascadeBERT, Findings of EMNLP 2021 ☆11 · Updated 2 years ago
- [NAACL 2022] "Learning to Win Lottery Tickets in BERT Transfer via Task-agnostic Mask Training", Yuanxin Liu, Fandong Meng, Zheng Lin, Pe… ☆15 · Updated 2 years ago
- Implementation for Variational Information Bottleneck for Effective Low-resource Fine-tuning, ICLR 2021 ☆38 · Updated 3 years ago
- [NeurIPS 2022] Non-Linguistic Supervision for Contrastive Learning of Sentence Embeddings ☆20 · Updated last year
- ☆9 · Updated 2 months ago
- Resources for Retrieval Augmentation for Commonsense Reasoning: A Unified Approach. EMNLP 2022. ☆19 · Updated last year
- Methods and evaluation for aligning language models temporally ☆24 · Updated 8 months ago
- Code for promptCSE, EMNLP 2022 ☆10 · Updated last year
- [NAACL 2022] Contrastive Learning for Prompt-based Few-shot Language Learners ☆22 · Updated last year
- Code for the ACL 2022 paper "Continual Sequence Generation with Adaptive Compositional Modules" ☆38 · Updated 2 years ago
- EMNLP 2022: BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation ☆38 · Updated 2 years ago
- Repo for outstanding paper @ ACL 2023 "Do PLMs Know and Understand Ontological Knowledge?" ☆27 · Updated last year
- Source code for our EMNLP'21 paper 《Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning》 ☆56 · Updated 3 years ago
- Crawl & visualize ICLR papers and reviews. ☆18 · Updated 2 years ago
- Code for EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆40 · Updated 2 years ago
- ACL 2023: Multi-Task Pre-Training of Modular Prompt for Few-Shot Learning ☆41 · Updated 2 years ago
- [Findings of EMNLP 2022] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models ☆19 · Updated last year
- Source code for our AAAI'22 paper 《From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression》 ☆23 · Updated 2 years ago
- Code for ACL 2022 paper "BERT Learns to Teach: Knowledge Distillation with Meta Learning". ☆82 · Updated 2 years ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… ☆21 · Updated 10 months ago
- This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆97 · Updated last year
- [Findings of EMNLP 2022] Holistic Sentence Embeddings for Better Out-of-Distribution Detection ☆18 · Updated last year
- The code for lifelong few-shot language learning ☆53 · Updated 2 years ago
- Code for "Inducer-tuning: Connecting Prefix-tuning and Adapter-tuning" (EMNLP 2022) and "Empowering Parameter-Efficient Transfer Learning… ☆11 · Updated last year
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆38 · Updated last year
- ☆22 · Updated last year
- EMNLP 2022: Analyzing and Evaluating Faithfulness in Dialogue Summarization ☆12 · Updated last year
- ☆32 · Updated 2 years ago