wearetyomsmnv / Awesome-LLMSecOpsLinks
LLM | Security | Operations in one github repo with good links and pictures.
☆63Updated 10 months ago
Alternatives and similar repositories for Awesome-LLMSecOps
Users that are interested in Awesome-LLMSecOps are comparing it to the libraries listed below
Sorting:
- Payloads for Attacking Large Language Models☆104Updated 4 months ago
- https://arxiv.org/abs/2412.02776☆64Updated 10 months ago
- ☆99Updated 3 weeks ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.☆166Updated 2 years ago
- All things specific to LLM Red Teaming Generative AI☆29Updated last year
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine tuned LLM for penetration testing guidance based on wri…☆32Updated 10 months ago
- Prototype of Full Agentic Application Security Testing, FAAST = SAST + DAST + LLM agents☆64Updated 6 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs)☆142Updated 10 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.☆27Updated last year
- Manual Prompt Injection / Red Teaming Tool☆44Updated last year
- Top 10 for Agentic AI (AI Agent Security) serves as the core for OWASP and CSA Red teaming work☆145Updated 3 weeks ago
- LLM Testing Findings Templates☆74Updated last year
- SourceGPT - prompt manager and source code analyzer built on top of ChatGPT as the oracle☆109Updated 2 years ago
- Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models☆87Updated this week
- using ML models for red teaming☆44Updated 2 years ago
- ☆91Updated last week
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks☆88Updated 5 months ago
- A LLM explicitly designed for getting hacked☆162Updated 2 years ago
- 🤖 A GitHub action that leverages fabric patterns through an agent-based approach☆32Updated 9 months ago
- Code snippets to reproduce MCP tool poisoning attacks.☆183Updated 6 months ago
- Awesome products for securing AI systems includes open source and commercial options and an infographic licensed CC-BY-SA-4.0.☆73Updated last year
- A very simple open source implementation of Google's Project Naptime☆172Updated 7 months ago
- Secure Jupyter Notebooks and Experimentation Environment☆84Updated 8 months ago
- We present MAPTA, a multi-agent system for autonomous web application security assessment that combines large language model orchestratio…☆69Updated 2 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation☆114Updated last year
- A collection of prompt injection mitigation techniques.☆24Updated 2 years ago
- An example vulnerable app that integrates an LLM☆24Updated last year
- Application which investigates defensive measures against prompt injection attacks on an LLM, with a focus on the exposure of external to…☆32Updated last year
- Curated resources, research, and tools for securing AI systems☆156Updated last week
- An interactive CLI application for interacting with authenticated Jupyter instances.☆55Updated 5 months ago