llmsecnet / llmsec-site
Source for llmsec.net
☆12 · Updated 4 months ago
Related projects
Alternatives and complementary repositories for llmsec-site
- Red-Teaming Language Models with DSPy ☆142 · Updated 7 months ago
- ☆21 · Updated this week
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆107 · Updated 8 months ago
- ☆63 · Updated this week
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆316 · Updated 9 months ago
- Generative AI Governance for Enterprises ☆14 · Updated last month
- Dropbox LLM Security research code and results ☆217 · Updated 6 months ago
- Payloads for Attacking Large Language Models ☆64 · Updated 4 months ago
- OWASP Machine Learning Security Top 10 Project ☆76 · Updated 2 months ago
- Official repo for Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs ☆21 · Updated last year
- Every practical and proposed defense against prompt injection. ☆347 · Updated 5 months ago
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). ☆122 · Updated 11 months ago
- LLM security and privacy ☆41 · Updated last month
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆19 · Updated last month
- A collection of prompt injection mitigation techniques. ☆18 · Updated last year
- 😎 Awesome list of resources about using and building AI software development systems ☆91 · Updated 6 months ago
- DevOps AI Assistant CLI. Ask questions about your AWS services, CloudWatch metrics, and billing. ☆66 · Updated 3 months ago
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discover… ☆34 · Updated 11 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆86 · Updated 5 months ago
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. … ☆39 · Updated 10 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆121 · Updated last year
- The project serves as a strategic advisory tool, capitalizing on the ZySec series of AI models to amplify the capabilities of security pr… ☆40 · Updated 6 months ago
- ☆356 · Updated 7 months ago
- [Corca / ML] Automatically solving Gandalf AI with an LLM ☆47 · Updated last year
- Security and compliance proxy for LLM APIs ☆45 · Updated last year
- Protection against Model Serialization Attacks ☆320 · Updated this week
- A benchmark for prompt injection detection systems. ☆87 · Updated 2 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆25 · Updated 5 months ago
- ☆20 · Updated 2 months ago
- Secure Jupyter Notebooks and Experimentation Environment ☆56 · Updated this week