IBM / URET
Universal Robustness Evaluation Toolkit (for Evasion)
☆31 · Updated last month
Alternatives and similar repositories for URET
Users interested in URET are comparing it to the libraries listed below; a minimal sketch of the evasion loop these toolkits automate follows the list.
- Code for the paper Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers ☆59 · Updated 3 years ago
- Repository for "SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques" publis… ☆71 · Updated last year
- On Training Robust PDF Malware Classifiers (USENIX Security '20) https://arxiv.org/abs/1904.03542 ☆29 · Updated 3 years ago
- Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs (ACM CCS '21) ☆17 · Updated 2 years ago
- PhD/MSc course on Machine Learning Security (Univ. Cagliari) ☆210 · Updated 3 weeks ago
- Machine Learning & Security Seminar @Purdue University ☆25 · Updated 2 years ago
- This repository contains the code and data for the paper **On the Limitations of Continual Learning for Malware Classification**, accepted to … ☆18 · Updated last year
- A curated list of academic events on AI Security & Privacy ☆153 · Updated 10 months ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆77 · Updated 2 years ago
- Learning Security Classifiers with Verified Global Robustness Properties (CCS '21) https://arxiv.org/pdf/2105.11363.pdf ☆28 · Updated 3 years ago
- The repository contains the code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… ☆96 · Updated 10 months ago
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆47 · Updated 3 months ago
- Reward Guided Test Generation for Deep Learning ☆20 · Updated 10 months ago
- Source code for the Energy-Latency Attacks via Sponge Poisoning paper ☆15 · Updated 3 years ago
- Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples ☆19 · Updated 3 years ago
- The official implementation of our preprint "Automatic and Universal Prompt Injection Attacks against Large Language Models" ☆49 · Updated 8 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated last year
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks ☆69 · Updated last year
- LLM Self Defense: By Self Examination, LLMs Know They Are Being Tricked ☆34 · Updated last year
- A Python library for Secure and Explainable Machine Learning ☆180 · Updated this week
- Implementation of the BEAST adversarial attack against language models (ICML 2024) ☆88 · Updated last year
- LLM security and privacy ☆48 · Updated 8 months ago
- Library for training globally-robust neural networks ☆28 · Updated last year
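
Most of the evasion-focused entries above (URET itself, the robust PDF malware classifier work, BEAST) automate some variant of the same loop: perturb an input under a constraint and query the target model until its prediction flips. The sketch below is a minimal, hypothetical illustration of that loop, not URET's actual API; the function name `random_search_evasion` and its `budget`/`step` parameters are invented for this example, and the classifier is a scikit-learn stand-in.

```python
# Hypothetical sketch of a black-box evasion loop (NOT URET's API).
# Assumes a purely feature-space threat model and a toy dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in for a malware/benign feature dataset.
X = rng.normal(size=(500, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def random_search_evasion(x, model, budget=200, step=0.25):
    """Greedy random perturbation search: keep a candidate only if it
    lowers the model's confidence in the input's original class."""
    label = model.predict(x.reshape(1, -1))[0]
    best = x.copy()
    best_p = model.predict_proba(best.reshape(1, -1))[0, label]
    for _ in range(budget):
        cand = best + rng.normal(scale=step, size=best.shape)
        p = model.predict_proba(cand.reshape(1, -1))[0, label]
        if p < best_p:
            best, best_p = cand, p
        if model.predict(best.reshape(1, -1))[0] != label:
            return best, True  # prediction flipped: evasion succeeded
    return best, False

adv, ok = random_search_evasion(X[0], clf)
print("evasion succeeded:", ok)
```

Real toolkits replace the naive Gaussian proposal with domain-aware transformations (for example, semantics-preserving edits to a PDF or a binary) so that a successful evasion is still a valid, functional input rather than an arbitrary feature vector.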