protectai / nbdefense-jupyter
☆12 · Updated last year
Alternatives and similar repositories for nbdefense-jupyter
Users interested in nbdefense-jupyter are comparing it with the libraries listed below.
- Secure Jupyter Notebooks and Experimentation Environment ☆84 · Updated 7 months ago
- ATLAS tactics, techniques, and case studies data ☆80 · Updated this week
- A collection of prompt injection mitigation techniques. ☆24 · Updated 2 years ago
- LLM prompt attacks for hacker CTFs via CTFd. ☆13 · Updated last year
- All things specific to LLM red teaming of generative AI ☆29 · Updated 11 months ago
- DEF CON 31 AI Village - LLMs: Loose Lips Multipliers ☆10 · Updated 2 years ago
- An interactive CLI application for interacting with authenticated Jupyter instances. ☆55 · Updated 4 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆115 · Updated last year
- Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆79 · Updated this week
- ☆23 · Updated last year
- Data Scientists Go To Jupyter ☆66 · Updated 7 months ago
- Codebase of https://arxiv.org/abs/2410.14923 ☆51 · Updated 11 months ago
- AI-featured threat modeling and security review project ☆16 · Updated 10 months ago
- A fun POC built to understand AI security agents. ☆33 · Updated 9 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆26 · Updated last year
- Payloads for Attacking Large Language Models ☆101 · Updated 4 months ago
- ☆42 · Updated 9 months ago
- Awesome products for securing AI systems; includes open source and commercial options, plus an infographic licensed CC-BY-SA-4.0. ☆71 · Updated last year
- A benchmark for prompt injection detection systems. ☆137 · Updated last month
- Dropbox LLM Security research code and results ☆235 · Updated last year
- https://arxiv.org/abs/2412.02776 ☆61 · Updated 9 months ago
- ☆151 · Updated 3 weeks ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆32 · Updated last year
- Machine Learning Attack Series ☆68 · Updated last year
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆138 · Updated 9 months ago
- ☆27 · Updated 2 years ago
- Source code for the offsecml framework ☆41 · Updated last year
- Using ML models for red teaming ☆44 · Updated 2 years ago
- Code for the paper "Defeating Prompt Injections by Design" ☆118 · Updated 3 months ago
- ☆19 · Updated 9 months ago