Protection against Model Serialization Attacks
☆646 · Feb 18, 2026 · Updated last week
Alternatives and similar repositories for modelscan
Users interested in modelscan are comparing it to the libraries listed below.
- Secure Jupyter Notebooks and Experimentation Environment ☆85 · Feb 6, 2025 · Updated last year
- Security scanner detecting Python Pickle files performing suspicious actions ☆393 · Feb 16, 2026 · Updated last week
- The Security Toolkit for LLM Interactions ☆2,584 · Dec 15, 2025 · Updated 2 months ago
- LLM Prompt Injection Detector ☆1,423 · Aug 7, 2024 · Updated last year
- The LLM vulnerability scanner ☆7,042 · Feb 19, 2026 · Updated last week
- A collection of real-world AI/ML exploits for responsibly disclosed vulnerabilities ☆1,693 · Oct 23, 2024 · Updated last year
- A Python pickling decompiler and static analyzer ☆604 · Updated this week
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng… ☆3,468 · Updated this week
- Zero-shot vulnerability discovery using LLMs ☆2,506 · Feb 6, 2025 · Updated last year
- A curated list of MLSecOps tools, articles, and other resources on security applied to Machine Learning and MLOps systems ☆423 · Aug 1, 2025 · Updated 7 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆458 · Jan 31, 2024 · Updated 2 years ago
- Crashbench is an LLM benchmark to measure the bug-finding and reporting capabilities of LLMs ☆14 · Jan 20, 2026 · Updated last month
- EZGHSA is a command-line tool for summarizing and filtering vulnerability alerts on GitHub repositories ☆35 · Jan 4, 2026 · Updated last month
- CTF challenges designed and implemented in machine learning applications ☆202 · Oct 8, 2025 · Updated 4 months ago
- Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪 ☆1,782 · Feb 3, 2026 · Updated 3 weeks ago
- Make your GenAI apps safe and secure: test and harden your system prompt ☆635 · Feb 16, 2026 · Updated last week
- Automated vulnerability discovery and annotation ☆68 · Jul 26, 2024 · Updated last year
- A security scanner for custom LLM applications ☆1,134 · Dec 1, 2025 · Updated 3 months ago
- Dropbox LLM security research code and results ☆255 · May 21, 2024 · Updated last year
- New ways of breaking app-integrated LLMs ☆2,053 · Jul 17, 2025 · Updated 7 months ago
- Working Group on Artificial Intelligence and Machine Learning (AI/ML) Security ☆145 · Dec 19, 2025 · Updated 2 months ago
- Identify hardcoded secrets in static structured text (version 2) ☆98 · Feb 5, 2025 · Updated last year
- Every practical and proposed defense against prompt injection ☆642 · Feb 22, 2025 · Updated last year
- Unauthenticated enumeration of AWS IAM roles ☆26 · Sep 7, 2025 · Updated 5 months ago
- Do you want to learn AI security but don't know where to start? Take a look at this map. ☆29 · Apr 23, 2024 · Updated last year
- A security scanner for your LLM agentic workflows ☆915 · Nov 27, 2025 · Updated 3 months ago
- A CLI that provides a generic automation layer for assessing the security of ML models ☆910 · Jul 18, 2025 · Updated 7 months ago
- Practical examples of "Flawed Machine Learning Security" together with ML security best practice across the end-to-end stages of the mach… ☆124 · Jun 6, 2022 · Updated 3 years ago
- OWASP Foundation Web Repository ☆352 · Updated this week
- R.A.Y.D.E.R revolutionizes security testing for generative AI by letting you test chatbots directly through their web interfaces. No API … ☆15 · Jul 23, 2025 · Updated 7 months ago
- Set of tools to assess and improve LLM security ☆4,036 · Updated this week
- Test Software for the Characterization of AI Technologies ☆278 · Updated this week
- RCE PoC for TensorFlow using a malicious Lambda layer ☆61 · Jul 30, 2025 · Updated 7 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆455 · Feb 26, 2024 · Updated 2 years ago
- Adversarial Robustness Toolbox (ART) - Python library for machine learning security - evasion, poisoning, extraction, inference - red and… ☆5,844 · Dec 12, 2025 · Updated 2 months ago
- AIGoat: a deliberately vulnerable AI infrastructure. Learn AI security through solving our challenges. ☆265 · Sep 16, 2025 · Updated 5 months ago
- YASA is an open-source static program analysis project. Its core innovation lies in a unified intermediate representation called UAST, d… ☆249 · Updated this week
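The pickle-focused scanners in this list generally work by statically inspecting a file's opcode stream rather than unpickling it, since unpickling untrusted data can execute arbitrary code. A minimal sketch of that idea using Python's standard pickletools module (the opcode set and flagging heuristic below are illustrative assumptions, not any listed tool's actual implementation):

```python
import pickle
import pickletools

# Opcodes that can import names or invoke callables during unpickling.
# Note: benign pickles of ordinary class instances also use these, which is
# why real scanners layer allow/deny lists of importable names on top.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(data: bytes) -> list:
    """Statically walk a pickle stream and report risky opcodes (no unpickling)."""
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS_OPCODES:
            findings.append("%s at byte %s (arg: %r)" % (opcode.name, pos, arg))
    return findings

# A classic malicious payload: __reduce__ smuggles a call to os.system.
class Malicious:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

payload = pickle.dumps(Malicious())
print(scan_pickle(payload))   # flags the import and call opcodes
print(scan_pickle(pickle.dumps([1, 2, 3])))  # plain data: no findings
```

Because the payload is only scanned, `os.system` never runs; the danger exists solely at unpickling time, which is exactly what this kind of static check avoids.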