GURPREETKAURJETHRA / LLM-SECURITYLinks
Securing LLM's Against Top 10 OWASP Large Language Model Vulnerabilities 2024
β17Updated last year
Alternatives and similar repositories for LLM-SECURITY
Users that are interested in LLM-SECURITY are comparing it to the libraries listed below
Sorting:
- LLM security and privacyβ49Updated 7 months ago
- π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed β¦β280Updated last year
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.β163Updated last year
- A collection of awesome resources related AI securityβ239Updated this week
- OWASP Machine Learning Security Top 10 Projectβ85Updated 4 months ago
- Payloads for Attacking Large Language Modelsβ90Updated this week
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.β320Updated 5 months ago
- A LLM explicitly designed for getting hackedβ149Updated last year
- AI-enabled Cybersecurity for Future Smart Environmentsβ24Updated 10 months ago
- This repository provides a benchmark for prompt Injection attacks and defensesβ216Updated last week
- LMAP (large language model mapper) is like NMAP for LLM, is an LLM Vulnerability Scanner and Zero-day Vulnerability Fuzzer.β11Updated 7 months ago
- Prompt Injections Everywhereβ126Updated 10 months ago
- Dropbox LLM Security research code and resultsβ228Updated last year
- All things specific to LLM Red Teaming Generative AIβ25Updated 7 months ago
- This is a dataset intended to train a LLM model for a completely CVE focused input and output.β60Updated 6 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilitiesβ30Updated last year
- β‘ Vigil β‘ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputsβ389Updated last year
- β44Updated last month
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine tuned LLM for penetration testing guidance based on wriβ¦β22Updated 5 months ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.β287Updated 9 months ago
- LLM | Security | Operations in one github repo with good links and pictures.β30Updated 5 months ago
- Prompt Injection Primer for Engineersβ435Updated last year
- A collection of prompt injection mitigation techniques.β23Updated last year
- CTF challenges designed and implemented in machine learning applicationsβ155Updated 9 months ago
- The fastest Trust Layer for AI Agentsβ136Updated last week
- A repository of Language Model Vulnerabilities and Exposures (LVEs).β110Updated last year
- Project LLM Verification Standardβ44Updated 3 weeks ago
- π€π‘οΈπππ Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.β23Updated last year
- A curated list of academic events on AI Security & Privacyβ152Updated 9 months ago
- Whispers in the Machine: Confidentiality in Agentic Systemsβ37Updated 2 weeks ago