This is the code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector"
Alternatives and similar repositories for AI-Safety_SCAV
Users interested in AI-Safety_SCAV are comparing it to the repositories listed below.
- Official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024)
- Code for the paper "Decomposing The Dark Matter of Sparse Autoencoders"
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis
- Code for the paper "The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence"
- Official repository for "Boosting Adversarial Transferability using Dynamic Cues" (ICLR 2023)
- All-in-one benchmarking platform for evaluating LLMs.
- Implementation of the paper "ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability"
- [ICML 2023] Protecting Language Generation Models via Invisible Watermarking
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons"
- [ICLR 2025] PyTorch implementation of "ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time"
- Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning (ICML 2023)
- A fast, lightweight implementation of the GCG algorithm in PyTorch
- The official implementation of Preference Data Reward-Augmentation.
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment.
- Reproduction code for the paper "Investigating Multi-Hop Factual Shortcuts in Knowledge Editing of Large Language Models"
- GitHub repo for the NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models"
- In-context learning (ICL) backdoor attack
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep"
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast
- Algebraic value editing in pretrained language models
- Reasoning Activation in LLMs via Small Model Transfer (NeurIPS 2025)
- Code for the ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples"
- Repository for the paper "Refusing Safe Prompts for Multi-modal Large Language Models"
- AISafetyLab: A comprehensive framework covering safety attacks, defenses, evaluation, and a paper list.
- Self-teaching notes on gradient leakage attacks against GPT-2 models.
- FGLA: Fast Generation-Based Gradient Leakage Attacks against Highly Compressed Gradients
- [ICLR 2025] Official repository for "Tamper-Resistant Safeguards for Open-Weight LLMs"