IBM / URET
Universal Robustness Evaluation Toolkit (for Evasion)
☆31 · Updated 3 months ago
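URET's stated purpose is to evaluate how easily a model's predictions can be flipped by small, bounded input transformations (evasion). Its actual API is not shown on this page; the snippet below is only a minimal, generic sketch of that kind of robustness check, using scikit-learn and NumPy with hypothetical `budget` and `n_trials` parameters, not URET's interface.

```python
# Generic sketch of an evasion-robustness check (NOT URET's actual API):
# train a classifier on tabular data, apply small bounded random perturbations
# to test inputs, and measure how often the predicted label flips.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
budget = 0.3     # hypothetical per-feature perturbation bound
n_trials = 50    # hypothetical number of random perturbations per sample

clean_pred = clf.predict(X_te)
flipped = np.zeros(len(X_te), dtype=bool)
for _ in range(n_trials):
    noise = rng.uniform(-budget, budget, size=X_te.shape)
    flipped |= clf.predict(X_te + noise) != clean_pred

print(f"clean accuracy:       {(clean_pred == y_te).mean():.3f}")
print(f"evasion success rate: {flipped.mean():.3f} (budget={budget}, trials={n_trials})")
```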
Alternatives and similar repositories for URET
Users interested in URET are comparing it to the libraries listed below.
- PhD/MSc course on Machine Learning Security (Univ. Cagliari) ☆211 · Updated 2 months ago
- Code for the paper "Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers" ☆59 · Updated 3 years ago
- A Python library for Secure and Explainable Machine Learning ☆185 · Updated 2 months ago
- A curated list of academic events on AI Security & Privacy ☆160 · Updated last year
- Repository for "SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques" publis… ☆75 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆113 · Updated last year
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆251 · Updated 3 weeks ago
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆50 · Updated 5 months ago
- Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs (ACM CCS'21) ☆17 · Updated 2 years ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆54 · Updated 10 months ago
- ☆66 · Updated 4 years ago
- ☆121 · Updated last year
- ☆24 · Updated last year
- ARMORY Adversarial Robustness Evaluation Test Bed ☆183 · Updated last year
- Learning Security Classifiers with Verified Global Robustness Properties (CCS'21) https://arxiv.org/pdf/2105.11363.pdf ☆27 · Updated 3 years ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆79 · Updated 2 years ago
- ☆44 · Updated 2 years ago
- The automated prompt injection framework for LLM-integrated applications. ☆226 · Updated 11 months ago
- This repository provides a benchmark for prompt injection attacks and defenses ☆267 · Updated last month
- LLM security and privacy ☆50 · Updated 10 months ago
- Code & Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆89 · Updated 11 months ago
- Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples ☆19 · Updated 3 years ago
- TrojanZoo provides a universal PyTorch platform to conduct security research (especially backdoor attacks/defenses) of image classifica… ☆300 · Updated last week
- ☆25 · Updated 3 years ago
- Implementation of the BEAST adversarial attack for language models (ICML 2024) ☆91 · Updated last year
- Proof-of-concept code for poisoning code generation models. ☆50 · Updated last year
- ☆147 · Updated 10 months ago
- The repository contains the code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… ☆100 · Updated last year
- Privacy Testing for Deep Learning ☆211 · Updated 2 years ago
- Machine Learning & Security Seminar @Purdue University ☆25 · Updated 2 years ago