wearetyomsmnv / Awesome-LLMSecOps
LLM | Security | Operations in one GitHub repo with good links and pictures.
☆29 · Updated 5 months ago
Alternatives and similar repositories for Awesome-LLMSecOps
Users interested in Awesome-LLMSecOps are comparing it to the repositories listed below.
- Top 10 for Agentic AI (AI Agent Security) ☆110 · Updated last week
- CVE-Bench: A Benchmark for AI Agents’ Ability to Exploit Real-World Web Application Vulnerabilities ☆53 · Updated last month
- Using ML models for red teaming ☆43 · Updated last year
- ☆40 · Updated 8 months ago
- A library to produce cybersecurity exploitation routes (exploit flows). Inspired by TensorFlow. ☆35 · Updated last year
- A collection of prompt injection mitigation techniques. ☆23 · Updated last year
- All things specific to red teaming generative AI LLMs ☆25 · Updated 7 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆23 · Updated last year
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆278 · Updated last year
- https://arxiv.org/abs/2412.02776 ☆54 · Updated 6 months ago
- ☆76 · Updated 3 weeks ago
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Bench ☆77 · Updated last month
- Do you want to learn AI security but don't know where to start? Take a look at this map. ☆23 · Updated last year
- Payloads for Attacking Large Language Models ☆89 · Updated 10 months ago
- Bundle of security analysis scripts for Keras/TensorFlow models ☆14 · Updated last year
- XBOW Validation Benchmarks ☆92 · Updated last week
- An Execution Isolation Architecture for LLM-Based Agentic Systems ☆80 · Updated 4 months ago
- The most comprehensive prompt hacking course available, recording our progress on a prompt engineering and prompt hacking course… ☆78 · Updated last month
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆113 · Updated 5 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆110 · Updated last year
- Awesome products for securing AI systems; includes open-source and commercial options and an infographic licensed CC-BY-SA-4.0. ☆63 · Updated 11 months ago
- LLM Testing Findings Templates ☆72 · Updated last year
- Code snippets to reproduce MCP tool poisoning attacks. ☆132 · Updated last month
- Future-proof vulnerability detection benchmark, based on CVEs in open-source repos ☆56 · Updated last week
- Secure Jupyter Notebooks and Experimentation Environment ☆75 · Updated 3 months ago
- ☆53 · Updated 3 weeks ago
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆68 · Updated last week
- ☆65 · Updated 4 months ago
- LMAP (large language model mapper) is like NMAP for LLMs: an LLM vulnerability scanner and zero-day vulnerability fuzzer. ☆11 · Updated 7 months ago
- An LLM explicitly designed for getting hacked ☆149 · Updated last year