DeepLearnXMU / SSR
Code for "Mitigating Catastrophic Forgetting in Large Language Models with Self-Synthesized Rehearsal" (ACL 2024)
☆13 · Updated 6 months ago
Alternatives and similar repositories for SSR:
Users interested in SSR are comparing it to the repositories listed below.
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models" ☆34 · Updated 3 months ago
- ☆26 · Updated last year
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year
- Code for the paper "Batch-ICL: Effective, Efficient, and Order-Agnostic In-Context Learning" ☆16 · Updated last year
- Official code for the ICML 2024 paper on Persona In-Context Learning (PICLe) ☆23 · Updated 9 months ago
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆67 · Updated last year
- ☆29 · Updated 11 months ago
- ☆49 · Updated last year
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆60 · Updated last year
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆72 · Updated 6 months ago
- Repo for the paper "Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge" ☆13 · Updated last year
- ☆38 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆67 · Updated 2 years ago
- ☆35 · Updated 6 months ago
- Active Example Selection for In-Context Learning (EMNLP'22) ☆49 · Updated 9 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- ☆21 · Updated last month
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆109 · Updated 7 months ago
- ☆57 · Updated 9 months ago
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆80 · Updated 7 months ago
- ☆10 · Updated 2 months ago
- ☆21 · Updated last year
- ☆13 · Updated last year
- ☆34 · Updated last month
- ☆17 · Updated last year
- Model merging as a highly efficient approach for long-to-short reasoning ☆42 · Updated 3 weeks ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆30 · Updated 5 months ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆39 · Updated last year
- ☆24 · Updated 2 years ago
- ☆41 · Updated last year