This is the code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector"
☆48 · Updated Oct 13, 2025
Alternatives and similar repositories for AI-Safety_SCAV
Users interested in AI-Safety_SCAV are comparing it to the repositories listed below.
- Multi-dimensional analysis of orthogonal safety directions in LLM alignment ☆22 · Updated Mar 20, 2025
- ☆16 · Updated May 17, 2025
- ☆39 · Updated Oct 15, 2024
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆49 · Updated Jan 15, 2026
- Official code for "What Makes and Breaks Safety Fine-tuning? A Mechanistic Study" (NeurIPS 2024) ☆12 · Updated Oct 31, 2024
- Code for our paper "Decomposing The Dark Matter of Sparse Autoencoders" ☆23 · Updated Feb 6, 2025
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis ☆10 · Updated Sep 23, 2021
- A package that achieves 95%+ transfer attack success rate against GPT-4 ☆26 · Updated Oct 24, 2024
- Code for the paper "The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence" ☆29 · Updated Jul 31, 2025
- Official repository for "Boosting Adversarial Transferability using Dynamic Cues" (ICLR 2023) ☆20 · Updated Aug 24, 2023
- All-in-one benchmarking platform for evaluating LLMs ☆15 · Updated Nov 12, 2025
- Implementation of the paper "ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability" ☆65 · Updated Jun 3, 2025
- [ICML 2023] Protecting Language Generation Models via Invisible Watermarking ☆13 · Updated Sep 8, 2023
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆90 · Updated Mar 30, 2025
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆15 · Updated Jan 13, 2023
- ☆35 · Updated Jun 28, 2025
- [ICLR 2025] PyTorch implementation of "ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time" ☆31 · Updated Jul 20, 2025
- Source code of the ECCV 2022 paper "LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity" ☆18 · Updated Mar 12, 2025
- ☆44 · Updated Oct 1, 2024
- A fast, lightweight implementation of the GCG algorithm in PyTorch ☆331 · Updated May 13, 2025
- The official implementation of Preference Data Reward-Augmentation ☆18 · Updated May 1, 2025
- Reproduction code for the paper "Investigating Multi-Hop Factual Shortcuts in Knowledge Editing of Large Language Models" ☆14 · Updated Jun 1, 2024
- [CIKM 2024] Trojan Activation Attack: Attack Large Language Models using Activation Steering for Safety-Alignment ☆30 · Updated Jul 29, 2024
- ☆45 · Updated Jun 19, 2025
- ICL backdoor attack ☆17 · Updated Nov 4, 2024
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆181 · Updated Apr 23, 2025
- GitHub repo for the NeurIPS 2024 paper "Safe LoRA: The Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆28 · Updated Dec 21, 2025
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆119 · Updated Mar 26, 2024
- ☆53 · Updated Feb 8, 2025
- ☆22 · Updated Oct 25, 2024
- Algebraic value editing in pretrained language models ☆70 · Updated Nov 1, 2023
- ☆12 · Updated Jul 16, 2025
- Reasoning Activation in LLMs via Small Model Transfer (NeurIPS 2025) ☆22 · Updated Oct 16, 2025
- Code for our ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples" ☆18 · Updated May 31, 2023
- Repository for the paper "Leave My Images Alone: Preventing Multi-Modal Large Language Models from Analyzing Images via Visual Prompt Inj…" ☆19 · Updated Apr 17, 2026
- ☆23 · Updated Dec 14, 2023
- Distribution Preserving Backdoor Attack in Self-supervised Learning ☆20 · Updated Jan 27, 2024
- Self-Teaching Notes on Gradient Leakage Attacks against GPT-2 models ☆14 · Updated Mar 18, 2024
- FGLA: Fast Generation-Based Gradient Leakage Attacks against Highly Compressed Gradients ☆14 · Updated Mar 17, 2026