Trust4AI / ASTRAL
Automated Safety Testing of Large Language Models
☆15 · Updated 4 months ago
Alternatives and similar repositories for ASTRAL
Users interested in ASTRAL are comparing it to the repositories listed below.
- Whispers in the Machine: Confidentiality in Agentic Systems ☆39 · Updated last month
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on… ☆30 · Updated last week
- ☆66 · Updated 11 months ago
- Official implementation of paper: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers ☆52 · Updated 10 months ago
- ☆21 · Updated last month
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆51 · Updated 10 months ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆49 · Updated 8 months ago
- ☆36 · Updated last month
- ☆74 · Updated 7 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆69 · Updated last year
- ☆89 · Updated 2 months ago
- The jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation. ☆23 · Updated 7 months ago
- ☆34 · Updated 7 months ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆51 · Updated 2 months ago
- The official implementation of our NAACL 2024 paper "A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Lang… ☆117 · Updated 5 months ago
- Implementation of BEAST adversarial attack for language models (ICML 2024) ☆88 · Updated last year
- Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique ☆17 · Updated 10 months ago
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆74 · Updated last year
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) ☆139 · Updated 6 months ago
- Official repository for the paper "ALERT: A Comprehensive Benchmark for Assessing Large Language Models’ Safety through Red Teaming" ☆42 · Updated 9 months ago
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆76 · Updated last month
- An Execution Isolation Architecture for LLM-Based Agentic Systems ☆82 · Updated 4 months ago
- ☆31 · Updated 3 months ago
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆106 · Updated last year
- Code repo for the paper: Attacking Vision-Language Computer Agents via Pop-ups ☆33 · Updated 6 months ago
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts. ☆158 · Updated 2 months ago
- Fine-tuning base models to build robust task-specific models ☆31 · Updated last year
- [NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning" ☆130 · Updated 2 months ago
- A prompt injection game to collect data for robust ML research ☆62 · Updated 5 months ago
- General research for Dreadnode ☆23 · Updated last year