alkaet / LobotoMl
LobotoMl is a set of scripts and tools for assessing production deployments of ML services.
☆10 · Updated 3 years ago
Alternatives and similar repositories for LobotoMl
Users who are interested in LobotoMl are comparing it to the libraries listed below.
- Future-proof vulnerability detection benchmark, based on CVEs in open-source repos ☆63 · Updated this week
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on… ☆100 · Updated 2 months ago
- ☆66 · Updated 3 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- ☆179 · Updated 6 months ago
- Data Scientists Go To Jupyter ☆68 · Updated 9 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆116 · Updated last year
- ☆98 · Updated 4 months ago
- ☆101 · Updated 2 months ago
- Python library for Adversarial ML Evaluation ☆24 · Updated 5 months ago
- Code for the paper "Defeating Prompt Injections by Design" ☆179 · Updated 5 months ago
- The D-CIPHER and NYU CTF baseline LLM agents built for NYU CTF Bench ☆110 · Updated last month
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 The first open-source fuzzing framework specifically designed … ☆332 · Updated last year
- ☆14 · Updated last year
- A benchmark for prompt injection detection systems. ☆151 · Updated 3 months ago
- Arxiv + Notion Sync ☆20 · Updated 7 months ago
- ☆153 · Updated 3 months ago
- General research for Dreadnode ☆27 · Updated last year
- An environment for testing AI agents against networks using Metasploit. ☆45 · Updated 2 years ago
- All things specific to LLM red teaming of generative AI ☆29 · Updated last year
- Using ML models for red teaming ☆45 · Updated 2 years ago
- Payloads for attacking large language models ☆114 · Updated 6 months ago
- Dropbox LLM Security research code and results ☆250 · Updated last year
- CyberBench: A Multi-Task Cyber LLM Benchmark ☆27 · Updated 7 months ago
- CVE-Bench: A Benchmark for AI Agents' Ability to Exploit Real-World Web Application Vulnerabilities ☆125 · Updated last month
- A utility to inspect, validate, sign and verify machine learning model files. ☆61 · Updated 10 months ago
- Implementation of the BEAST adversarial attack for language models (ICML 2024) ☆92 · Updated last year
- Example agents for the Dreadnode platform ☆20 · Updated 3 weeks ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆33 · Updated last year
- Multi-agent system (MAS) hijacking demos ☆39 · Updated last week