wunderwuzzi23 / mlattacks
Machine Learning Attack Series
☆56 · Updated 6 months ago
Related projects
Alternatives and complementary repositories for mlattacks
- A JupyterLab extension to evaluate the security of your Jupyter environment ☆39 · Updated last year
- A security-first linter for code that shouldn't need linting ☆16 · Updated last year
- Practical examples of "Flawed Machine Learning Security" together with ML Security best practice across the end to end stages of the mach… ☆101 · Updated 2 years ago
- Central repo for talks and presentations ☆43 · Updated 4 months ago
- Payloads for Attacking Large Language Models ☆64 · Updated 4 months ago
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). ☆121 · Updated 10 months ago
- Codebase of https://arxiv.org/abs/2410.14923 ☆30 · Updated last month
- DevOps AI Assistant CLI. Ask questions about your AWS services, CloudWatch metrics, and billing. ☆65 · Updated 3 months ago
- Dropbox LLM Security research code and results ☆217 · Updated 6 months ago
- [Corca / ML] Automatically solving the Gandalf AI challenge with an LLM ☆46 · Updated last year
- A utility to inspect, validate, sign and verify machine learning model files. ☆42 · Updated 2 weeks ago
- My inputs for the LLM Gandalf made by Lakera ☆36 · Updated last year
- Render notebooks like nbviewer, but using Quarto as the renderer ☆56 · Updated 6 months ago
- ☆22 · Updated 9 months ago
- Future-proof vulnerability detection benchmark, based on CVEs in open-source repos ☆44 · Updated last week
- Project LLM Verification Standard ☆36 · Updated 7 months ago
- An environment for testing AI agents against networks using Metasploit. ☆37 · Updated last year
- Awesome products for securing AI systems: open source and commercial options, plus an infographic licensed CC-BY-SA-4.0. ☆48 · Updated 5 months ago
- Lightweight LLM Interaction Framework ☆210 · Updated this week
- ☆61 · Updated 3 weeks ago
- Source code for the offsecml framework ☆35 · Updated 5 months ago
- LLM plugin for models hosted by Anyscale Endpoints ☆32 · Updated 7 months ago
- Fiddler Auditor is a tool to evaluate language models. ☆171 · Updated 8 months ago
- ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications ☆193 · Updated 8 months ago
- Source for llmsec.net ☆12 · Updated 3 months ago
- BlindBox is a tool to isolate and deploy applications inside Trusted Execution Environments for privacy-by-design apps ☆57 · Updated last year
- Tree of Attacks (TAP) Jailbreaking Implementation ☆95 · Updated 9 months ago
- Your buddy in the (L)LM space. ☆63 · Updated 2 months ago
- An interactive CLI application for interacting with authenticated Jupyter instances. ☆47 · Updated 8 months ago
- Using ML models for red teaming ☆39 · Updated last year