Azure / counterfit
A CLI that provides a generic automation layer for assessing the security of ML models.
☆ 800 · Updated 11 months ago
Related projects:
- Adversarial Threat Landscape for AI Systems ☆ 1,037 · Updated last year
- ARMORY Adversarial Robustness Evaluation Test Bed ☆ 174 · Updated 8 months ago
- Privacy Testing for Deep Learning ☆ 183 · Updated last year
- Test Software for the Characterization of AI Technologies ☆ 212 · Updated last week
- A Python library for Secure and Explainable Machine Learning ☆ 144 · Updated 4 months ago
- OWASP Foundation Web Repository ☆ 504 · Updated last week
- A toolkit for tools and techniques related to the privacy and compliance of AI models ☆ 96 · Updated 2 months ago
- OWASP Foundation Web Repository ☆ 199 · Updated last month
- PhD/MSc course on Machine Learning Security (Univ. Cagliari) ☆ 190 · Updated this week
- ☆ 120 · Updated 2 years ago
- Sophos-ReversingLabs 20 million sample dataset ☆ 624 · Updated 3 years ago
- Protection against Model Serialization Attacks ☆ 273 · Updated this week
- Differential privacy validator and runtime ☆ 290 · Updated 2 years ago
- SecML-Torch: A Library for Robustness Evaluation of Deep Learning Models ☆ 19 · Updated last month
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆ 74 · Updated last year
- 🧠 LLMFuzzer: Fuzzing Framework for Large Language Models ☆ 218 · Updated 7 months ago
- OWASP Machine Learning Security Top 10 Project ☆ 69 · Updated last week
- Create adversarial attacks against machine learning Windows malware detectors ☆ 202 · Updated 2 months ago
- Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms ☆ 581 · Updated 3 weeks ago
- A curated list of large language model tools for cybersecurity research ☆ 376 · Updated 5 months ago
- Adversarial Robustness Toolbox (ART): Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and… ☆ 4,761 · Updated this week
- LLM vulnerability scanner ☆ 1,273 · Updated this week
- CALDERA plugin for adversary emulation of AI-enabled systems ☆ 82 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs) ☆ 103 · Updated 6 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆ 299 · Updated 7 months ago
- An awesome list of papers on privacy attacks against machine learning ☆ 552 · Updated 6 months ago
- Understand adversary tradecraft and improve detection strategies ☆ 699 · Updated last year
- A curated list of academic events on AI Security & Privacy ☆ 128 · Updated last month
- Dropbox LLM Security research code and results ☆ 210 · Updated 4 months ago
- Code samples and documentation for SmartNoise differential privacy tools ☆ 132 · Updated 2 years ago