JosephTLucas / jupysec
A JupyterLab extension to evaluate the security of your Jupyter environment
☆39 · Updated 2 years ago
Alternatives and similar repositories for jupysec
Users interested in jupysec are comparing it to the libraries listed below.
- ☆71 · Updated 2 months ago
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). ☆151 · Updated 2 years ago
- A security-first linter for code that shouldn't need linting ☆17 · Updated 2 years ago
- Lightweight LLM Interaction Framework ☆404 · Updated this week
- An interactive CLI application for interacting with authenticated Jupyter instances. ☆55 · Updated 8 months ago
- Test Software for the Characterization of AI Technologies ☆270 · Updated last week
- BlindBox is a tool to isolate and deploy applications inside Trusted Execution Environments for privacy-by-design apps ☆63 · Updated 2 years ago
- A utility to inspect, validate, sign and verify machine learning model files. ☆64 · Updated 11 months ago
- A YAML based format for describing tools to LLMs, like man pages but for robots! ☆83 · Updated 8 months ago
- Repository for CoSAI Workstream 4, Secure Design Patterns for Agentic Systems ☆51 · Updated this week
- Secure Jupyter Notebooks and Experimentation Environment ☆84 · Updated 11 months ago
- Using ML models for red teaming ☆45 · Updated 2 years ago
- This repository is for administrative documents for the CoSAI OASIS Open Project ☆70 · Updated this week
- Tree of Attacks (TAP) Jailbreaking Implementation ☆117 · Updated last year
- ATLAS tactics, techniques, and case studies data ☆97 · Updated 3 weeks ago
- Machine Learning Attack Series ☆75 · Updated last year
- Practical examples of "Flawed Machine Learning Security" together with ML Security best practice across the end to end stages of the mach… ☆124 · Updated 3 years ago
- LobotoMl is a set of scripts and tools to assess production deployments of ML services ☆10 · Updated 3 years ago
- The Privacy Adversarial Framework (PAF) is a knowledge base of privacy-focused adversarial tactics and techniques. PAF is heavily inspire… ☆57 · Updated 2 years ago
- Code for the paper "Defeating Prompt Injections by Design" ☆212 · Updated 7 months ago
- Dropbox LLM Security research code and results ☆252 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- Use LLMs for document ranking ☆160 · Updated 9 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆33 · Updated last year
- Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems ☆221 · Updated 4 months ago
- Data Scientists Go To Jupyter ☆68 · Updated 10 months ago
- A toolset to test data classification engines that generates mock data in various file formats, sizes and data profiles. ☆43 · Updated 2 years ago
- Improve prompts for e.g. GPT-3 and GPT-J using templates and hyperparameter optimization. ☆42 · Updated 3 years ago
- Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆92 · Updated last week
- List of ML file formats ☆65 · Updated last year