PalisadeResearch / llm-honeypot
☆52 · Updated 2 weeks ago
Alternatives and similar repositories for llm-honeypot
Users interested in llm-honeypot are comparing it to the libraries listed below.
- A collection of prompt injection mitigation techniques. ☆26 · Updated 2 years ago
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to pote… ☆201 · Updated 3 months ago
- Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆92 · Updated last week
- Codebase of https://arxiv.org/abs/2410.14923 ☆52 · Updated last year
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆92 · Updated 7 months ago
- This is a repository to experiment with MCP for security ☆45 · Updated 11 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆152 · Updated last year
- https://arxiv.org/abs/2412.02776 ☆67 · Updated last year
- Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts Within GenAI Ecosystems ☆221 · Updated 3 months ago
- ☆55 · Updated 8 months ago
- A knowledge source about TTPs used to target GenAI-based systems, copilots and agents ☆131 · Updated 2 weeks ago
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. … ☆58 · Updated 2 years ago
- Payloads for Attacking Large Language Models ☆114 · Updated 7 months ago
- The project serves as a strategic advisory tool, capitalizing on the ZySec series of AI models to amplify the capabilities of security pr… ☆66 · Updated last year
- 🤖 A GitHub action that leverages fabric patterns through an agent-based approach ☆32 · Updated last year
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine tuned LLM for penetration testing guidance based on wri… ☆35 · Updated last year
- LLM | Security | Operations in one github repo with good links and pictures. ☆85 · Updated last week
- ☆126 · Updated 2 weeks ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆116 · Updated last year
- ATLAS tactics, techniques, and case studies data ☆93 · Updated last week
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆435 · Updated last year
- Code for the paper "Defeating Prompt Injections by Design" ☆205 · Updated 6 months ago
- Dropbox LLM Security research code and results ☆251 · Updated last year
- ☆13 · Updated 2 years ago
- A utility to inspect, validate, sign and verify machine learning model files. ☆62 · Updated 11 months ago
- Lightweight LLM Interaction Framework ☆400 · Updated this week
- ☆44 · Updated last year
- Benchmarking LLM agents on Cyber Threat Investigation. ☆109 · Updated 2 weeks ago
- ☆71 · Updated 3 weeks ago
- A YAML based format for describing tools to LLMs, like man pages but for robots! ☆82 · Updated 8 months ago