wunderwuzzi23 / mlattacks
Machine Learning Attack Series
☆68 · Updated last year
Alternatives and similar repositories for mlattacks
Users interested in mlattacks are comparing it to the repositories listed below:
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). ☆145 · Updated last year
- Codebase of https://arxiv.org/abs/2410.14923 ☆50 · Updated 10 months ago
- An interactive CLI application for working with authenticated Jupyter instances. ☆55 · Updated 4 months ago
- ☆148 · Updated 2 weeks ago
- My inputs for the LLM Gandalf made by Lakera ☆47 · Updated 2 years ago
- A JupyterLab extension to evaluate the security of your Jupyter environment ☆39 · Updated 2 years ago
- Dropbox LLM Security research code and results ☆235 · Updated last year
- A security-first linter for code that shouldn't need linting ☆16 · Updated 2 years ago
- Code for the paper "Defeating Prompt Injections by Design" ☆114 · Updated 3 months ago
- Lightweight LLM Interaction Framework ☆375 · Updated this week
- Payloads for Attacking Large Language Models ☆99 · Updated 3 months ago
- A utility to inspect, validate, sign and verify machine learning model files. ☆58 · Updated 7 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆115 · Updated last year
- Awesome products for securing AI systems; includes open-source and commercial options and an infographic licensed CC-BY-SA-4.0. ☆70 · Updated last year
- Project LLM Verification Standard ☆49 · Updated 4 months ago
- Secure Jupyter Notebooks and Experimentation Environment ☆84 · Updated 7 months ago
- ☆69 · Updated 3 months ago
- Source code for the offsecml framework ☆41 · Updated last year
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to pote… ☆185 · Updated 5 months ago
- Central repo for talks and presentations ☆46 · Updated last year
- An environment for testing AI agents against networks using Metasploit. ☆44 · Updated 2 years ago
- Data Scientists Go To Jupyter ☆66 · Updated 6 months ago
- Using ML models for red teaming ☆44 · Updated 2 years ago
- The Privacy Adversarial Framework (PAF) is a knowledge base of privacy-focused adversarial tactics and techniques. PAF is heavily inspire… ☆58 · Updated 2 years ago
- ChainReactor is a research project that leverages AI planning to discover exploitation chains for privilege escalation on Unix systems. T… ☆51 · Updated 10 months ago
- A YAML based format for describing tools to LLMs, like man pages but for robots! ☆78 · Updated 4 months ago
- BlindBox is a tool to isolate and deploy applications inside Trusted Execution Environments for privacy-by-design apps ☆61 · Updated last year
- Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems ☆205 · Updated last week
- Multi-agent system (MAS) hijacking demos ☆33 · Updated last month
- ATHI — An AI Threat Modeling Framework for Policymakers ☆56 · Updated 2 years ago