wunderwuzzi23 / mlattacks
Machine Learning Attack Series
☆57 · Updated 8 months ago
Alternatives and similar repositories for mlattacks:
Users interested in mlattacks are comparing it to the repositories listed below.
- A security-first linter for code that shouldn't need linting ☆16 · Updated last year
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). ☆129 · Updated last year
- Payloads for Attacking Large Language Models ☆72 · Updated 6 months ago
- An interactive CLI application for interacting with authenticated Jupyter instances. ☆50 · Updated 10 months ago
- ☆62 · Updated last month
- ☆33 · Updated last month
- Practical examples of "Flawed Machine Learning Security" together with ML Security best practice across the end to end stages of the mach… ☆105 · Updated 2 years ago
- A JupyterLab extension to evaluate the security of your Jupyter environment ☆39 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆108 · Updated 10 months ago
- ChainReactor is a research project that leverages AI planning to discover exploitation chains for privilege escalation on Unix systems. T… ☆41 · Updated 2 months ago
- BlindBox is a tool to isolate and deploy applications inside Trusted Execution Environments for privacy-by-design apps ☆56 · Updated last year
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆60 · Updated last month
- Repo for the testing-genai workshop ☆13 · Updated last week
- Secure Jupyter Notebooks and Experimentation Environment ☆65 · Updated 2 weeks ago
- Dropbox LLM Security research code and results ☆219 · Updated 8 months ago
- LLM plugin for models hosted by Anyscale Endpoints ☆32 · Updated 9 months ago
- Lightweight LLM Interaction Framework ☆229 · Updated this week
- My inputs for the LLM Gandalf made by Lakera ☆41 · Updated last year
- Codebase of https://arxiv.org/abs/2410.14923 ☆34 · Updated 3 months ago
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. … ☆39 · Updated last year
- ☆23 · Updated 11 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated 8 months ago
- Awesome products for securing AI systems; includes open-source and commercial options and an infographic licensed CC-BY-SA-4.0. ☆56 · Updated 7 months ago
- [Corca / ML] Automatically solved Gandalf AI with LLM ☆47 · Updated last year
- List of ML file formats ☆44 · Updated 10 months ago
- Vulnerability scanner for AWS customer managed policies using ChatGPT ☆143 · Updated last year
- A utility to inspect, validate, sign and verify machine learning model files. ☆52 · Updated 2 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆133 · Updated last year
- OWASP Machine Learning Security Top 10 Project ☆79 · Updated 4 months ago
- An LLM explicitly designed for getting hacked ☆134 · Updated last year