trailofbits / ml-file-formats
List of ML file formats
☆51 · Updated last year
Alternatives and similar repositories for ml-file-formats:
Users interested in ml-file-formats are comparing it to the libraries listed below.
- A JupyterLab extension to evaluate the security of your Jupyter environment ☆39 · Updated last year
- Supply chain security for ML ☆154 · Updated this week
- The Privacy Adversarial Framework (PAF) is a knowledge base of privacy-focused adversarial tactics and techniques. PAF is heavily inspire… ☆56 · Updated last year
- A security-first linter for code that shouldn't need linting ☆16 · Updated last year
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated 10 months ago
- An interactive CLI application for interacting with authenticated Jupyter instances ☆53 · Updated last year
- Using ML models for red teaming ☆43 · Updated last year
- ATLAS tactics, techniques, and case studies data ☆63 · Updated last month
- LLM | Security | Operations in one GitHub repo with good links and pictures ☆28 · Updated 3 months ago
- Precaution CLI, a command-line static application security testing tool ☆23 · Updated last week
- Secure Jupyter Notebooks and Experimentation Environment ☆74 · Updated 2 months ago
- BlindBox is a tool to isolate and deploy applications inside Trusted Execution Environments for privacy-by-design apps ☆57 · Updated last year
- ☆21 · Updated last week
- A collection of prompt injection mitigation techniques ☆22 · Updated last year
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs) ☆136 · Updated last year
- Security and compliance proxy for LLM APIs ☆46 · Updated last year
- Source for llmsec.net ☆15 · Updated 9 months ago
- ☆64 · Updated 3 months ago
- Code Pathfinder, the open-source alternative to GitHub CodeQL built with Go. Built for advanced structural search, derive insights, f… ☆58 · Updated this week
- Data Scientists Go To Jupyter ☆62 · Updated last month
- Payloads for Attacking Large Language Models ☆79 · Updated 9 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆106 · Updated last year
- A benchmark for prompt injection detection systems ☆100 · Updated 2 months ago
- CodeQL Security Queries ☆26 · Updated this week
- jailbreak-evaluation, an easy-to-use Python package for language model jailbreak evaluation ☆22 · Updated 5 months ago
- Automatically scan new PyPI packages for potentially malicious code ☆30 · Updated last year
- LLM security and privacy ☆48 · Updated 6 months ago
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆67 · Updated 4 months ago
- ☆31 · Updated 5 months ago
- ☆127 · Updated 5 months ago