facebookresearch / privacy_adversarial_framework
The Privacy Adversarial Framework (PAF) is a knowledge base of privacy-focused adversarial tactics and techniques. PAF is heavily inspired by MITRE ATT&CK®.
☆56 · Updated last year
Alternatives and similar repositories for privacy_adversarial_framework:
Users interested in privacy_adversarial_framework are comparing it to the repositories listed below.
- Tree of Attacks (TAP) Jailbreaking Implementation ☆99 · Updated 11 months ago
- using ML models for red teaming ☆39 · Updated last year
- Data Scientists Go To Jupyter ☆62 · Updated 2 months ago
- A collection of prompt injection mitigation techniques. ☆20 · Updated last year
- LLM Testing Findings Templates ☆66 · Updated 11 months ago
- source code for the offsecml framework ☆37 · Updated 7 months ago
- ☆33 · Updated last month
- A utility to inspect, validate, sign and verify machine learning model files. ☆52 · Updated 2 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated 8 months ago
- AI/ML applications have unique security threats. Project GuardRail is a set of security and privacy requirements that AI/ML applications … ☆25 · Updated 3 weeks ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆156 · Updated last year
- Payloads for Attacking Large Language Models ☆72 · Updated 6 months ago
- Awesome products for securing AI systems; includes open source and commercial options and an infographic licensed CC-BY-SA-4.0. ☆56 · Updated 7 months ago
- An interactive CLI application for interacting with authenticated Jupyter instances. ☆50 · Updated 10 months ago
- ☆114 · Updated 2 months ago
- Stage 1: Sensitive Email/Chat Classification for Adversary Agent Emulation (espionage). This project is meant to extend Red Reaper v1 whi… ☆36 · Updated 5 months ago
- ☆22 · Updated 11 months ago
- InfoSec OpenAI Examples ☆19 · Updated last year
- An LLM explicitly designed for getting hacked ☆134 · Updated last year
- A benchmark for prompt injection detection systems. ☆95 · Updated 4 months ago
- future-proof vulnerability detection benchmark, based on CVEs in open-source repos ☆46 · Updated last week
- ☆23 · Updated 11 months ago
- CALDERA plugin for adversary emulation of AI-enabled systems ☆87 · Updated last year
- A powerful tool that leverages AI to automatically generate comprehensive security documentation for your projects ☆25 · Updated this week
- Build a CVE library with aggregated CISA, EPSS and CVSS data ☆27 · Updated last year
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the vict… ☆41 · Updated 9 months ago
- ☆101 · Updated 7 months ago
- A Completely Modular LLM Reverse Engineering, Red Teaming, and Vulnerability Research Framework. ☆45 · Updated 2 months ago
- ReconPal: Leveraging NLP for Infosec ☆55 · Updated 2 years ago
- ATLAS tactics, techniques, and case studies data ☆54 · Updated 3 months ago