amazon-science / controlling-llm-memorization
☆36 · Updated 2 years ago
Alternatives and similar repositories for controlling-llm-memorization
Users interested in controlling-llm-memorization are comparing it to the libraries listed below.
- 🤫 Code and benchmark for our ICLR 2024 spotlight paper: "Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Con…" (☆42 · Updated last year)
- ☆44 · Updated 6 months ago
- Official implementation of Privacy Implications of Retrieval-Based Language Models (EMNLP 2023). https://arxiv.org/abs/2305.14888 (☆36 · Updated last year)
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" (☆36 · Updated 5 months ago)
- ☆13 · Updated 2 years ago
- Official Repository for Dataset Inference for LLMs (☆36 · Updated last year)
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning (☆96 · Updated last year)
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" (☆59 · Updated 10 months ago)
- Code for watermarking language models (☆80 · Updated 11 months ago)
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" (☆106 · Updated 5 months ago)
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" from Findings of NAACL 2022 (☆30 · Updated 3 years ago)
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models (☆82 · Updated 10 months ago)
- Official Code for ACL 2023 paper: "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid…" (☆23 · Updated 2 years ago)
- The official repository of the paper "On the Exploitability of Instruction Tuning" (☆64 · Updated last year)
- ☆44 · Updated 2 years ago
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer (☆44 · Updated last year)
- ☆55 · Updated 2 years ago
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs (☆87 · Updated 8 months ago)
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages https://arxiv.org/abs/2310.19156 (☆35 · Updated last year)
- LLM Unlearning (☆172 · Updated last year)
- ☆26 · Updated last year
- NeurIPS'24 - LLM Safety Landscape (☆25 · Updated 5 months ago)
- Official code implementation of SKU, accepted to ACL 2024 Findings (☆15 · Updated 7 months ago)
- Restore safety in fine-tuned language models through task arithmetic (☆28 · Updated last year)
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models (☆84 · Updated 3 months ago)
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) (☆77 · Updated 10 months ago)
- Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" (☆57 · Updated last year)
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety (☆85 · Updated last year)
- ☆38 · Updated last year
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue (☆35 · Updated 2 months ago)