facebookresearch/privacy_adversarial_framework
The Privacy Adversarial Framework (PAF) is a knowledge base of privacy-focused adversarial tactics and techniques. PAF is heavily inspired by MITRE ATT&CK®.
☆56 · Updated last year
Alternatives and similar repositories for privacy_adversarial_framework:
Users interested in privacy_adversarial_framework are comparing it to the repositories listed below.
- Using ML models for red teaming ☆42 · Updated last year
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated 9 months ago
- ☆119 · Updated 3 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆100 · Updated last year
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆88 · Updated 2 months ago
- LLM Testing Findings Templates ☆66 · Updated last year
- Codebase of https://arxiv.org/abs/2410.14923 ☆44 · Updated 4 months ago
- Data Scientists Go To Jupyter ☆62 · Updated 3 months ago
- An interactive CLI application for interacting with authenticated Jupyter instances ☆50 · Updated 11 months ago
- Secure Jupyter Notebooks and Experimentation Environment ☆67 · Updated 3 weeks ago
- InfoSec OpenAI Examples ☆19 · Updated last year
- OWASP Machine Learning Security Top 10 Project ☆81 · Updated last month
- ReconPal: Leveraging NLP for Infosec ☆56 · Updated 2 years ago
- ☆21 · Updated last year
- ☆37 · Updated 2 months ago
- Dragon-GPT uses ChatGPT, or a local LLM, to execute automatic and AI-powered threat modeling analysis on a given OWASP Threat Dragon diagr… ☆33 · Updated this week
- Implementation of the BEAST adversarial attack for language models (ICML 2024) ☆79 · Updated 9 months ago
- AI/ML applications have unique security threats. Project GuardRail is a set of security and privacy requirements that AI/ML applications … ☆26 · Updated 2 months ago
- ATLAS tactics, techniques, and case studies data ☆57 · Updated 5 months ago
- A powerful tool that leverages AI to automatically generate comprehensive security documentation for your projects ☆45 · Updated this week
- Payloads for Attacking Large Language Models ☆75 · Updated 7 months ago
- A benchmark for prompt injection detection systems ☆96 · Updated 3 weeks ago
- Explore AI supply chain risk with the AI Risk Database ☆52 · Updated 9 months ago
- Delving into the realm of LLM security: an exploration of offensive and defensive tools, unveiling their present capabilities ☆159 · Updated last year
- Cybersecurity of machine learning and artificial intelligence ☆69 · Updated 2 years ago
- Awesome products for securing AI systems; includes open-source and commercial options and an infographic licensed CC-BY-SA-4.0 ☆59 · Updated 8 months ago
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the vict… ☆41 · Updated last week
- Source code for the offsecml framework ☆37 · Updated 8 months ago
- High-signal information security sources Goggle ☆67 · Updated last year
- SourceGPT: a prompt manager and source code analyzer built on top of ChatGPT as the oracle ☆108 · Updated last year