This is the code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector"
☆47 · Updated Oct 13, 2025
Alternatives and similar repositories for AI-Safety_SCAV
Users that are interested in AI-Safety_SCAV are comparing it to the libraries listed below.
- Multi-dimensional analysis of orthogonal safety directions in LLM alignment ☆21 · Updated Mar 20, 2025
- ☆13 · Updated May 17, 2025
- ☆38 · Updated Oct 15, 2024
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆49 · Updated Jan 15, 2026
- The official repository for the guided jailbreak benchmark ☆29 · Updated Jul 28, 2025
- ☆14 · Updated Jun 6, 2023
- Code for our paper "Decomposing The Dark Matter of Sparse Autoencoders" ☆23 · Updated Feb 6, 2025
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis ☆10 · Updated Sep 23, 2021
- A package that achieves a 95%+ transfer attack success rate against GPT-4 ☆26 · Updated Oct 24, 2024
- Code for the paper "The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence" ☆28 · Updated Jul 31, 2025
- Official repository for "Boosting Adversarial Transferability using Dynamic Cues" (ICLR 2023) ☆20 · Updated Aug 24, 2023
- All-in-one benchmarking platform for evaluating LLMs ☆15 · Updated Nov 12, 2025
- Implementation of the paper "ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability" ☆65 · Updated Jun 3, 2025
- Improving Alignment and Robustness with Circuit Breakers ☆260 · Updated Sep 24, 2024
- [ICML 2023] Protecting Language Generation Models via Invisible Watermarking ☆13 · Updated Sep 8, 2023
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆89 · Updated Mar 30, 2025
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆15 · Updated Jan 13, 2023
- ☆34 · Updated Jun 28, 2025
- [ICLR 2025] PyTorch implementation of "ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time" ☆31 · Updated Jul 20, 2025
- Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning (accepted at ICML 2023) ☆14 · Updated Mar 31, 2024
- Source code for the ECCV 2022 paper "LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity" ☆18 · Updated Mar 12, 2025
- ☆44 · Updated Oct 1, 2024
- A fast, lightweight implementation of the GCG algorithm in PyTorch ☆324 · Updated May 13, 2025
- The official implementation of Preference Data Reward-Augmentation ☆18 · Updated May 1, 2025
- Reproduction code for the paper "Investigating Multi-Hop Factual Shortcuts in Knowledge Editing of Large Language Models" ☆14 · Updated Jun 1, 2024
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment ☆30 · Updated Jul 29, 2024
- ☆45 · Updated Jun 19, 2025
- ☆20 · Updated Jan 13, 2024
- ICL backdoor attack ☆17 · Updated Nov 4, 2024
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆178 · Updated Apr 23, 2025
- GitHub repo for the NeurIPS 2024 paper "Safe LoRA: The Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆28 · Updated Dec 21, 2025
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆117 · Updated Mar 26, 2024
- ☆52 · Updated Feb 8, 2025
- ☆22 · Updated Oct 25, 2024
- ☆12 · Updated Jul 16, 2025
- Reasoning Activation in LLMs via Small Model Transfer (NeurIPS 2025) ☆22 · Updated Oct 16, 2025
- Code for our ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples" ☆18 · Updated May 31, 2023
- Repository for the paper "Leave My Images Alone: Preventing Multi-Modal Large Language Models from Analyzing Images via Visual Prompt Inj…" ☆19 · Updated this week
- ☆23 · Updated Dec 14, 2023