bayegaspard / GoldMineLinks
An AI-powered tool that helps security professionals detect vulnerabilities at machine speed and extract insights from extensive bug bounty reports. By leveraging generative AI and Retrieval-Augmented Generation (RAG), GoldMine supercharges red-teaming operations and keeps you ahead in the cyber battle.
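The RAG approach described above can be sketched in miniature: retrieve the report snippets most relevant to a query, then pack them into a prompt for a language model. This is a hypothetical illustration using bag-of-words cosine similarity over made-up findings, not GoldMine's actual retrieval code; the report texts, `retrieve` function, and prompt format are all assumptions.

```python
# Minimal RAG retrieval sketch (hypothetical data and helpers;
# GoldMine's real pipeline likely uses embeddings and a vector store).
import math
from collections import Counter

def tokenize(text):
    return [t.lower().strip(".,()") for t in text.split()]

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counter vectors.
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    denom = (math.sqrt(sum(v * v for v in a.values()))
             * math.sqrt(sum(v * v for v in b.values())))
    return num / denom if denom else 0.0

# Stand-in corpus of bug bounty findings (invented for this example).
reports = [
    "SQL injection in the login endpoint allowed auth bypass",
    "Stored XSS in the comment field executed in the admin panel",
    "IDOR on /api/invoices exposed other users billing data",
]

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query; keep the top k.
    qv = Counter(tokenize(query))
    scored = sorted(docs,
                    key=lambda d: cosine(qv, Counter(tokenize(d))),
                    reverse=True)
    return scored[:k]

top = retrieve("injection vulnerability in login", reports)
# The retrieved context is then stuffed into the LLM prompt.
prompt = "Summarize these findings:\n" + "\n".join(top)
```

In a real deployment the bag-of-words scoring would typically be replaced by dense embeddings, but the retrieve-then-prompt shape is the same.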
☆18 · Updated last year
Alternatives and similar repositories for GoldMine
Users interested in GoldMine are comparing it to the repositories listed below.
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees, run everything locally on your system. ☆310 · Updated last year
- Code repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆90 · Updated last week
- ☆277 · Updated 3 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆166 · Updated 2 years ago
- ☆42 · Updated 11 months ago
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine-tuned LLM for penetration testing guidance based on wri… ☆34 · Updated 11 months ago
- A research platform to develop automated security policies using quantitative methods, e.g., optimal control, computational game theory, … ☆139 · Updated this week
- ☆113 · Updated this week
- A powerful tool that leverages AI to automatically generate comprehensive security documentation for your projects ☆98 · Updated last month
- This repository contains resources and materials for the "AI Agents and Retrieval Augmented Generation (RAG) for Cybersecurity Operations… ☆116 · Updated last month
- Curated resources, research, and tools for securing AI systems ☆206 · Updated 2 weeks ago
- A curated list of LLM-driven cybersecurity resources ☆42 · Updated last month
- ☆38 · Updated 11 months ago
- NOVA: The Prompt Pattern Matching ☆57 · Updated last month
- AIGoat: A deliberately vulnerable AI infrastructure. Learn AI security through solving our challenges. ☆260 · Updated 2 months ago
- ☆344 · Updated 2 months ago
- An LLM explicitly designed for getting hacked ☆163 · Updated 2 years ago
- An experimental project using LLM technology to generate security documentation for Open Source Software (OSS) projects ☆34 · Updated 9 months ago
- ☆55 · Updated 7 months ago
- Payloads for attacking Large Language Models ☆112 · Updated 6 months ago
- Curated list of open-source projects focused on LLM security ☆67 · Updated last year
- One Conference 2024 ☆111 · Updated last year
- The project serves as a strategic advisory tool, capitalizing on the ZySec series of AI models to amplify the capabilities of security pr… ☆64 · Updated last year
- A collection of agents that use Large Language Models (LLMs) to perform tasks common in our day-to-day jobs in cybersecurity. ☆221 · Updated last year
- ATLAS tactics, techniques, and case studies data ☆89 · Updated 2 weeks ago
- Reference notes for the Attacking and Defending Generative AI presentation ☆67 · Updated last year
- Adversarial AI - Attacks, Mitigations, and Defense Strategies, published by Packt ☆67 · Updated last year
- ☆100 · Updated 2 weeks ago
- A curated list of MLSecOps tools, articles, and other resources on security applied to Machine Learning and MLOps systems ☆404 · Updated 4 months ago
- A Risk-Based Prioritization Taxonomy for prioritizing CVEs (Common Vulnerabilities and Exposures) ☆81 · Updated last year