☆40 · Updated Nov 4, 2024
Alternatives and similar repositories for starter-kit
Users interested in starter-kit are comparing it to the libraries listed below.
- [ICML 2023] "Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights?" by Ruisi Cai, Zhenyu Zhang, Zhangyang Wang ☆16 · Updated May 4, 2023
- The repo for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models" ☆13 · Updated Dec 16, 2024
- Code for the ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" ☆37 · Updated Jun 1, 2025
- Official GitHub page for the paper "Evaluating Deep Unlearning in Large Language Models" ☆14 · Updated Apr 25, 2025
- Code for the NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ☆22 · Updated May 6, 2025
- Playing around with various jailbreaking techniques ahead of the Gray Swan AI Ultimate Jailbreaking Competition ☆18 · Updated Oct 6, 2024
- PyTorch and NNsight implementation of AtP* (Kramár et al. 2024, DeepMind) ☆20 · Updated Jan 19, 2025
- ☆27 · Updated Sep 22, 2025
- A fast + lightweight implementation of the GCG algorithm in PyTorch ☆318 · Updated May 13, 2025
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆58 · Updated Dec 20, 2024
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs; empirical tricks for LLM jailbreaking (NeurIPS 2024) ☆163 · Updated Nov 30, 2024
- ☆72 · Updated Mar 30, 2025
- This is the official repository for the paper "Towards Vision-Language Mechanistic Interpretability: A Causal Tracing Tool for BLIP", acce… ☆25 · Updated Feb 16, 2026
- ☆28 · Updated Oct 14, 2021
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆70 · Updated Feb 22, 2024
- Code used in our paper "Robust Deep Reinforcement Learning through Adversarial Loss" ☆33 · Updated Oct 3, 2023
- TensorFlow API analysis tool and malicious model detection tool ☆39 · Updated May 27, 2025
- [NeurIPS 2024] Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling ☆34 · Updated Nov 8, 2024
- Finding trojans in aligned LLMs; official repository for the competition hosted at SaTML 2024 ☆116 · Updated Jun 13, 2024
- Fine-tuning base models to build robust task-specific models ☆34 · Updated Apr 11, 2024
- ☆75 · Updated Jan 21, 2026
- TAP: An automated jailbreaking method for black-box LLMs ☆221 · Updated Dec 10, 2024
- Auditing agents for fine-tuning safety ☆20 · Updated Oct 21, 2025
- ☆224 · Updated Aug 17, 2025
- ☆35 · Updated Sep 13, 2023
- ☆33 · Updated Jun 24, 2024
- ☆37 · Updated Oct 2, 2024
- Example code for the Tianchi AAAI 2022 Security AI Challenger Program, Phase 8 ☆22 · Updated Feb 9, 2022
- ☆38 · Updated Oct 3, 2023
- A Framework for Evaluating AI Agent Safety in Realistic Environments ☆30 · Updated Oct 2, 2025
- Project exploring 3D volumetric rendering of NEXRAD radar data ☆11 · Updated Oct 23, 2023
- ☆12 · Updated May 6, 2022
- [ACL 2024] Official repo of the paper "ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs" ☆94 · Updated Aug 15, 2025
- [NeurIPS 2023 D&B Track] Code and data for the paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evalua…" ☆36 · Updated Jun 8, 2023
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆42 · Updated Jan 25, 2024
- The official repository for the paper "MLLM-Protector: Ensuring MLLM’s Safety without Hurting Performance" ☆44 · Updated Apr 21, 2024
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆377 · Updated Jan 23, 2025
- Interpreting the latent-space representations of attention head outputs for LLMs ☆38 · Updated Aug 13, 2024
- ☆698 · Updated Jul 2, 2025