locuslab / acr-memorization
☆31 · Updated 3 months ago
Alternatives and similar repositories for acr-memorization:
Users interested in acr-memorization are comparing it to the repositories listed below.
- ☆54 · Updated 2 years ago
- ☆42 · Updated 2 months ago
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆52 · Updated last month
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆96 · Updated last month
- ☆21 · Updated last month
- Code for safety test in "Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates" ☆18 · Updated last year
- Official Repository for the Paper: "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆86 · Updated 9 months ago
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆60 · Updated 3 months ago
- Code for paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆52 · Updated 11 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆90 · Updated 10 months ago
- Code for "Universal Adversarial Triggers Are Not Universal." ☆17 · Updated 11 months ago
- Official Repository for Dataset Inference for LLMs ☆33 · Updated 8 months ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆29 · Updated 2 months ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆76 · Updated 5 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆35 · Updated 5 months ago
- ☆26 · Updated 9 months ago
- ☆21 · Updated 8 months ago
- ☆54 · Updated 8 months ago
- ☆34 · Updated 6 months ago
- ☆41 · Updated last year
- ☆21 · Updated 4 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆71 · Updated last month
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆33 · Updated 2 months ago
- Providing the answer to "How to do patching on all available SAEs on GPT-2?". It is an official repository of the implementation of the p… ☆11 · Updated 2 months ago
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated 4 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆74 · Updated 2 weeks ago
- Official code for the paper "Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable" ☆14 · Updated last month
- ☆23 · Updated last month
- ☆42 · Updated 2 years ago
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆37 · Updated 2 months ago