facebookresearch/privacy_adversarial_framework
The Privacy Adversarial Framework (PAF) is a knowledge base of privacy-focused adversarial tactics and techniques. PAF is heavily inspired by MITRE ATT&CK®.
☆54 · Updated last year
Related projects
Alternatives and complementary repositories for privacy_adversarial_framework:
- Tree of Attacks (TAP) Jailbreaking Implementation (☆95 · Updated 9 months ago)
- Using ML models for red teaming (☆39 · Updated last year)
- Data Scientists Go To Jupyter (☆57 · Updated this week)
- An interactive CLI application for interacting with authenticated Jupyter instances (☆48 · Updated 8 months ago)
- LLM Testing Findings Templates (☆65 · Updated 9 months ago)
- Risks and targets for assessing LLMs & LLM vulnerabilities (☆25 · Updated 5 months ago)
- CALDERA plugin for adversary emulation of AI-enabled systems (☆85 · Updated last year)
- Secure Jupyter Notebooks and Experimentation Environment (☆56 · Updated last month)
- Future-proof vulnerability detection benchmark, based on CVEs in open-source repos (☆44 · Updated last week)
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities (☆149 · Updated last year)
- Payloads for Attacking Large Language Models (☆64 · Updated 4 months ago)
- Stage 1: Sensitive Email/Chat Classification for Adversary Agent Emulation (espionage). This project is meant to extend Red Reaper v1 whi… (☆23 · Updated 2 months ago)
- OWASP Machine Learning Security Top 10 Project (☆76 · Updated 2 months ago)
- Source code for the offsecml framework (☆35 · Updated 5 months ago)
- The IoT Security Testing Guide (ISTG) provides a comprehensive methodology for penetration tests in the IoT field, offering flexibility t… (☆88 · Updated last month)
- A collection of awesome resources related to AI security (☆131 · Updated 8 months ago)
- AI-featured threat modeling and security review action (☆40 · Updated this week)
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) (☆43 · Updated this week)
- CTF challenges designed and implemented in machine learning applications (☆114 · Updated 2 months ago)
- Central repo for talks and presentations (☆43 · Updated 3 months ago)
- An LLM explicitly designed for getting hacked (☆130 · Updated last year)
- A utility to inspect, validate, sign, and verify machine learning model files (☆42 · Updated last week)