bayegaspard / GoldMineLinks
GoldMine is an AI-powered tool that helps security professionals detect vulnerabilities at machine speed and extract insights from large collections of bug bounty reports. By leveraging Generative AI and Retrieval-Augmented Generation (RAG), GoldMine supercharges red-teaming operations and keeps you ahead in the cyber battle.
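The RAG approach described above boils down to two steps: retrieve the bug bounty report snippets most relevant to a query, then pass them to a generative model as context. A minimal sketch of that idea, using a toy corpus and a bag-of-words cosine similarity for retrieval — the `REPORTS` data, the scoring, and the `generate()` stub are illustrative assumptions, not GoldMine's actual implementation:

```python
# Minimal RAG sketch: bag-of-words retrieval over a toy corpus of
# bug bounty report snippets, followed by a stubbed generation step.
# This is an assumption-heavy illustration, not GoldMine's real pipeline.
import math
from collections import Counter

REPORTS = [
    "SQL injection in the login form allowed authentication bypass",
    "Stored XSS in the comment field executed arbitrary JavaScript",
    "IDOR on the users API exposed other users' profile data",
]

def vectorize(text: str) -> Counter:
    """Turn text into a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k report snippets most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(REPORTS, key=lambda r: cosine(qv, vectorize(r)),
                    reverse=True)
    return ranked[:k]

def generate(query: str, context: list[str]) -> str:
    """Placeholder for the LLM call that would consume the context."""
    return f"Context: {context[0]}\nQuery: {query}"

print(generate("sql injection login", retrieve("sql injection login")))
```

In a real pipeline the bag-of-words scoring would typically be replaced by embedding-based vector search, and `generate()` by a call to a hosted or local LLM with the retrieved snippets inlined into the prompt.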
☆17 · Updated last year
Alternatives and similar repositories for GoldMine
Users interested in GoldMine are comparing it to the repositories listed below.
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities ☆164 · Updated last year
- ☆258 · Updated 3 weeks ago
- ☆70 · Updated last week
- A research platform to develop automated security policies using quantitative methods, e.g., optimal control, computational game theory, … ☆131 · Updated this week
- ☆56 · Updated 4 months ago
- ATLAS tactics, techniques, and case studies data ☆80 · Updated last month
- Code repository for AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆77 · Updated this week
- NOVA: The Prompt Pattern Matching ☆175 · Updated 2 months ago
- LLMBUS AI red team tool 🚍 ☆73 · Updated last month
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system. ☆302 · Updated last year
- AIGoat: a deliberately vulnerable AI infrastructure. Learn AI security by solving our challenges. ☆250 · Updated last week
- Curated list of open-source projects focused on LLM security ☆62 · Updated 10 months ago
- An LLM explicitly designed for getting hacked ☆160 · Updated 2 years ago
- CALDERA plugin for adversary emulation of AI-enabled systems ☆99 · Updated 2 years ago
- Curated resources, research, and tools for securing AI systems ☆101 · Updated this week
- A curated list of LLM-driven cybersecurity resources ☆36 · Updated 3 months ago
- Payloads for attacking Large Language Models ☆100 · Updated 3 months ago
- A powerful tool that leverages AI to automatically generate comprehensive security documentation for your projects ☆91 · Updated 3 weeks ago
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆85 · Updated 4 months ago
- ☆38 · Updated 8 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆138 · Updated 9 months ago
- Dropbox LLM Security research code and results ☆235 · Updated last year
- This repository contains resources and materials for the "AI Agents and Retrieval Augmented Generation (RAG) for Cybersecurity Operations… ☆89 · Updated 3 weeks ago
- ☆42 · Updated 9 months ago
- Payloads for AI Red Teaming and beyond ☆282 · Updated last month
- A curated list of MLSecOps tools, articles, and other resources on security applied to Machine Learning and MLOps systems ☆369 · Updated last month
- A collection of agents that use Large Language Models (LLMs) to perform tasks common in day-to-day cybersecurity work ☆175 · Updated last year
- Application which investigates defensive measures against prompt injection attacks on an LLM, with a focus on the exposure of external to… ☆32 · Updated 11 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆170 · Updated 2 years ago
- ☆11 · Updated 2 years ago