lakeraai / chainguard
Guard your LangChain applications against prompt injection with Lakera ChainGuard.
☆24 · Updated 3 months ago
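To illustrate the idea behind a tool like ChainGuard, here is a minimal, self-contained sketch of an input guard that screens user text for common prompt-injection phrases before it reaches an LLM chain. This is a hypothetical heuristic, not ChainGuard's actual API: the `PromptInjectionError` name, the `guard` function, and the patterns are all illustrative assumptions; a real detector would use a trained classifier rather than regexes.

```python
import re

# Illustrative-only patterns; real detectors use ML classifiers,
# not keyword matching. These names are not from any real library.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

class PromptInjectionError(Exception):
    """Raised when input looks like a prompt-injection attempt."""

def guard(user_input: str) -> str:
    """Return the input unchanged, or raise if it matches a known pattern."""
    lowered = user_input.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            raise PromptInjectionError(f"possible prompt injection: {pattern!r}")
    return user_input
```

In a LangChain-style pipeline, such a check would typically run on user input before the prompt template is filled, so flagged requests fail fast instead of reaching the model.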
Alternatives and similar repositories for chainguard
Users interested in chainguard are comparing it to the libraries listed below.
- A benchmark for prompt injection detection systems. ☆120 · Updated last month
- Security and compliance proxy for LLM APIs ☆47 · Updated last year
- Lakera - ChatGPT Data Leak Protection ☆22 · Updated 11 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆282 · Updated last year
- Red-Teaming Language Models with DSPy ☆198 · Updated 4 months ago
- Dropbox LLM Security research code and results ☆227 · Updated last year
- Project LLM Verification Standard ☆44 · Updated last month
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆394 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆381 · Updated last year
- A text embedding viewer for the Jupyter environment ☆20 · Updated last year
- Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platfor…