lakeraai / canica
A text embedding viewer for the Jupyter environment
☆19 · Updated last year
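For context, a text embedding viewer of this kind typically projects high-dimensional sentence vectors down to 2D so they can be explored inside a notebook. The sketch below is not canica's API; it is a generic, assumed illustration of that workflow using scikit-learn, matplotlib, and a sentence-transformers model, with made-up sample sentences.

```python
# Generic illustration of viewing text embeddings in a Jupyter notebook.
# NOTE: this does NOT use canica's API; it only sketches the underlying idea
# (embed texts, reduce to 2D, plot) with standard libraries.
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
from sentence_transformers import SentenceTransformer  # assumed to be installed

texts = [
    "Prompt injection bypasses the system prompt.",
    "The model refused the jailbreak attempt.",
    "Embeddings cluster semantically similar sentences.",
    "Visualise the vectors directly inside the notebook.",
]

# Embed the texts (model name is illustrative, not prescribed by canica).
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(texts)

# Project the high-dimensional vectors to 2D for plotting.
points = PCA(n_components=2).fit_transform(embeddings)

fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter(points[:, 0], points[:, 1])
for (x, y), text in zip(points, texts):
    ax.annotate(text[:30], (x, y), fontsize=8)
ax.set_title("2D projection of text embeddings")
plt.show()
```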
Alternatives and similar repositories for canica
Users interested in canica are comparing it to the libraries listed below.
- A benchmark for prompt injection detection systems. ☆110 · Updated this week
- Lakera - ChatGPT Data Leak Protection ☆22 · Updated 10 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆109 · Updated last year
- [Corca / ML] Automatically solving Gandalf AI with an LLM ☆50 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆369 · Updated last year
- Guard your LangChain applications against prompt injection with Lakera ChainGuard. ☆22 · Updated 2 months ago
- Turning Gandalf against itself. Use LLMs to automate playing the Lakera Gandalf challenge without needing to set up an account with a platfor… ☆29 · Updated last year
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆154 · Updated last week
- Dropbox LLM Security research code and results ☆225 · Updated 11 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated 11 months ago
- Project LLM Verification Standard ☆43 · Updated last year
- A prompt injection game to collect data for robust ML research ☆56 · Updated 3 months ago
- Every practical and proposed defense against prompt injection. ☆456 · Updated 2 months ago
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. ☆111 · Updated 11 months ago
- Explore AI Supply Chain Risk with the AI Risk Database ☆56 · Updated last year
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆66 · Updated last year
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆94 · Updated last year
- This repository provides a benchmark for prompt injection attacks and defenses. ☆196 · Updated 2 weeks ago
- Fiddler Auditor is a tool to evaluate language models. ☆179 · Updated last year
- Dataset for the Tensor Trust project ☆40 · Updated last year
- Code to break Llama Guard ☆31 · Updated last year
- Implementation of the BEAST adversarial attack for language models (ICML 2024) ☆86 · Updated last year
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆179 · Updated last month
- OWASP Machine Learning Security Top 10 Project ☆85 · Updated 3 months ago
- Copycat CNN ☆27 · Updated last year
- A curated list of academic events on AI Security & Privacy ☆150 · Updated 8 months ago
- The repository contains the code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… ☆93 · Updated 9 months ago
- DEF CON 31 AI Village - LLMs: Loose Lips Multipliers ☆10 · Updated last year