lakeraai / chrome-extension
Lakera - ChatGPT Data Leak Protection
☆22 · Updated 7 months ago
Alternatives and similar repositories for chrome-extension:
Users interested in chrome-extension are comparing it to the libraries listed below.
- Guard your LangChain applications against prompt injection with Lakera ChainGuard. ☆18 · Updated last month
- ☆43 · Updated 2 years ago
- A text embedding viewer for the Jupyter environment ☆19 · Updated last year
- A framework-less approach to robust agent development. ☆154 · Updated this week
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆108 · Updated 11 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆89 · Updated 8 months ago
- ☆70 · Updated 4 months ago
- Red-Teaming Language Models with DSPy ☆169 · Updated last week
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆333 · Updated 11 months ago
- [Corca / ML] Automatically solved Gandalf AI with an LLM ☆48 · Updated last year
- Turning Gandalf against itself. Use LLMs to automate playing the Lakera Gandalf challenge without needing to set up an account with a platfor… ☆29 · Updated last year
- Make your GenAI Apps Safe & Secure: Test & harden your system prompt ☆435 · Updated 4 months ago
- This repository provides an implementation to formalize and benchmark Prompt Injection attacks and defenses ☆172 · Updated last month
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated 8 months ago
- LLM security and privacy ☆47 · Updated 4 months ago
- Every practical and proposed defense against prompt injection. ☆389 · Updated 8 months ago
- Approximation of the Claude 3 tokenizer by inspecting generation stream ☆123 · Updated 7 months ago
- Fiddler Auditor is a tool to evaluate language models. ☆175 · Updated 11 months ago
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. ☆109 · Updated 8 months ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆43 · Updated 2 months ago
- Self-hardening firewall for large language models ☆263 · Updated 11 months ago
- The Security Toolkit for managing Generative AI (especially LLMs) and Supervised Learning processes (Learning and Inference). ☆20 · Updated 6 months ago
- Code to break Llama Guard ☆31 · Updated last year
- This open-source repository offers reference code for integrating workplace datastores with Cohere's LLMs, enabling developers and busine… ☆147 · Updated 4 months ago
- Whispers in the Machine: Confidentiality in LLM-integrated Systems ☆33 · Updated 2 weeks ago
- Can Large Language Models Solve Security Challenges? We test LLMs' ability to interact and break out of shell environments using the Over… ☆12 · Updated last year
- ☆13 · Updated last year
- Zero-trust AI APIs for easy and private consumption of open-source LLMs ☆38 · Updated 6 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆264 · Updated last month
- 🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉Prompt Injection, ☣️ Data Poisoning. Watch the recorded… ☆18 · Updated 7 months ago