0din-ai / 0Din-Curated-Monthly-White-Papers
This repository curates a collection of monthly white papers focused on the latest LLM attacks and defenses.
☆22 · Updated 4 months ago
Alternatives and similar repositories for 0Din-Curated-Monthly-White-Papers:
- A collection of prompt injection mitigation techniques. ☆20 · Updated last year
- Payloads for Attacking Large Language Models ☆75 · Updated 7 months ago
- An LLM explicitly designed for getting hacked ☆137 · Updated last year
- A future-proof vulnerability detection benchmark based on CVEs in open-source repos ☆46 · Updated this week
- Integrate PyRIT in existing tools ☆13 · Updated 2 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆140 · Updated last year
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆84 · Updated 2 months ago
- This repository contains various attacks against Large Language Models. ☆94 · Updated 9 months ago
- 🤖🛡️🔍🔒🔑 A tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆22 · Updated 9 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆99 · Updated last year
- OWASP Top 10 for Agentic AI (AI Agent Security), pre-release version ☆53 · Updated this week
- Red-Teaming Language Models with DSPy ☆169 · Updated last week
- ☆198 · Updated last year
- A collection of awesome resources related to AI security ☆175 · Updated 3 weeks ago
- Multi-Lingual GenAI Red Teaming Tool ☆23 · Updated 6 months ago
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆24 · Updated last month
- General research for Dreadnode ☆19 · Updated 8 months ago
- ☆64 · Updated last month
- Prompt Injections Everywhere ☆103 · Updated 6 months ago
- LLM Testing Findings Templates ☆66 · Updated last year
- https://arxiv.org/abs/2412.02776 ☆47 · Updated 2 months ago
- An interactive CLI application for interacting with authenticated Jupyter instances. ☆50 · Updated 11 months ago
- A research project to add some brrrrrr to Burp ☆129 · Updated 2 weeks ago
- Awesome products for securing AI systems, including open-source and commercial options, plus an infographic licensed CC-BY-SA-4.0. ☆58 · Updated 8 months ago
- Data Scientists Go To Jupyter ☆62 · Updated 2 months ago
- Learn about a type of vulnerability that specifically targets machine learning models ☆222 · Updated 8 months ago
- AI/ML applications have unique security threats. Project GuardRail is a set of security and privacy requirements that AI/ML applications … ☆26 · Updated last month
- Make your GenAI apps safe & secure: test & harden your system prompt ☆435 · Updated 4 months ago
- A Completely Modular LLM Reverse Engineering, Red Teaming, and Vulnerability Research Framework ☆46 · Updated 3 months ago